00:00:00.001 Started by upstream project "autotest-per-patch" build number 132071
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.039 The recommended git tool is: git
00:00:00.039 using credential 00000000-0000-0000-0000-000000000002
00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.063 Fetching changes from the remote Git repository
00:00:00.065 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.096 Using shallow fetch with depth 1
00:00:00.096 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.096 > git --version # timeout=10
00:00:00.147 > git --version # 'git version 2.39.2'
00:00:00.147 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.183 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.183 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.905 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.916 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.927 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.927 > git config core.sparsecheckout # timeout=10
00:00:07.937 > git read-tree -mu HEAD # timeout=10
00:00:07.953 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.970 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.970 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:08.053 [Pipeline] Start of Pipeline
00:00:08.069 [Pipeline] library
00:00:08.072 Loading library shm_lib@master
00:00:08.072 Library shm_lib@master is cached. Copying from home.
00:00:08.086 [Pipeline] node
00:00:08.097 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:08.098 [Pipeline] {
00:00:08.107 [Pipeline] catchError
00:00:08.109 [Pipeline] {
00:00:08.123 [Pipeline] wrap
00:00:08.132 [Pipeline] {
00:00:08.140 [Pipeline] stage
00:00:08.142 [Pipeline] { (Prologue)
00:00:08.159 [Pipeline] echo
00:00:08.160 Node: VM-host-WFP7
00:00:08.167 [Pipeline] cleanWs
00:00:08.176 [WS-CLEANUP] Deleting project workspace...
00:00:08.176 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.183 [WS-CLEANUP] done
00:00:08.377 [Pipeline] setCustomBuildProperty
00:00:08.465 [Pipeline] httpRequest
00:00:08.855 [Pipeline] echo
00:00:08.857 Sorcerer 10.211.164.101 is alive
00:00:08.867 [Pipeline] retry
00:00:08.869 [Pipeline] {
00:00:08.884 [Pipeline] httpRequest
00:00:08.889 HttpMethod: GET
00:00:08.889 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.890 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.911 Response Code: HTTP/1.1 200 OK
00:00:08.912 Success: Status code 200 is in the accepted range: 200,404
00:00:08.912 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:32.645 [Pipeline] }
00:00:32.656 [Pipeline] // retry
00:00:32.662 [Pipeline] sh
00:00:32.944 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:32.956 [Pipeline] httpRequest
00:00:33.731 [Pipeline] echo
00:00:33.732 Sorcerer 10.211.164.101 is alive
00:00:33.738 [Pipeline] retry
00:00:33.739 [Pipeline] {
00:00:33.750 [Pipeline] httpRequest
00:00:33.754 HttpMethod: GET
00:00:33.755 URL: http://10.211.164.101/packages/spdk_f2120392bc602cf43f1c355dd60038bf670d31b9.tar.gz
00:00:33.755 Sending request to url: http://10.211.164.101/packages/spdk_f2120392bc602cf43f1c355dd60038bf670d31b9.tar.gz
00:00:33.776 Response Code: HTTP/1.1 200 OK
00:00:33.776 Success: Status code 200 is in the accepted range: 200,404
00:00:33.776 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_f2120392bc602cf43f1c355dd60038bf670d31b9.tar.gz
00:01:56.201 [Pipeline] }
00:01:56.218 [Pipeline] // retry
00:01:56.226 [Pipeline] sh
00:01:56.509 + tar --no-same-owner -xf spdk_f2120392bc602cf43f1c355dd60038bf670d31b9.tar.gz
00:01:59.058 [Pipeline] sh
00:01:59.341 + git -C spdk log --oneline -n5
00:01:59.341 f2120392b test/scheduler: Account for multiple cpus in the affinity mask
00:01:59.341 a7a314bb5 test/nvmf: Tweak nvme_connect()
00:01:59.341 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:01:59.341 1a1586409 nvmf: use bdev's nsid for admin command passthru
00:01:59.341 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns()
00:01:59.359 [Pipeline] writeFile
00:01:59.375 [Pipeline] sh
00:01:59.660 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:59.672 [Pipeline] sh
00:01:59.955 + cat autorun-spdk.conf
00:01:59.955 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.955 SPDK_RUN_ASAN=1
00:01:59.955 SPDK_RUN_UBSAN=1
00:01:59.955 SPDK_TEST_RAID=1
00:01:59.955 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.962 RUN_NIGHTLY=0
00:01:59.964 [Pipeline] }
00:01:59.977 [Pipeline] // stage
00:01:59.991 [Pipeline] stage
00:01:59.994 [Pipeline] { (Run VM)
00:02:00.007 [Pipeline] sh
00:02:00.288 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:00.288 + echo 'Start stage prepare_nvme.sh'
00:02:00.288 Start stage prepare_nvme.sh
00:02:00.288 + [[ -n 5 ]]
00:02:00.288 + disk_prefix=ex5
00:02:00.288 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:02:00.288 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:02:00.288 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:02:00.288 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:00.288 ++ SPDK_RUN_ASAN=1
00:02:00.288 ++ SPDK_RUN_UBSAN=1
00:02:00.288 ++ SPDK_TEST_RAID=1
00:02:00.288 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:00.288 ++ RUN_NIGHTLY=0
00:02:00.288 + cd /var/jenkins/workspace/raid-vg-autotest
00:02:00.288 + nvme_files=()
00:02:00.288 + declare -A nvme_files
00:02:00.288 + backend_dir=/var/lib/libvirt/images/backends
00:02:00.288 + nvme_files['nvme.img']=5G
00:02:00.288 + nvme_files['nvme-cmb.img']=5G
00:02:00.288 + nvme_files['nvme-multi0.img']=4G
00:02:00.288 + nvme_files['nvme-multi1.img']=4G
00:02:00.288 + nvme_files['nvme-multi2.img']=4G
00:02:00.288 + nvme_files['nvme-openstack.img']=8G
00:02:00.288 + nvme_files['nvme-zns.img']=5G
00:02:00.288 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:00.288 + (( SPDK_TEST_FTL == 1 ))
00:02:00.288 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:00.288 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:00.288 + for nvme in "${!nvme_files[@]}"
00:02:00.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:00.288 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.288 + for nvme in "${!nvme_files[@]}"
00:02:00.288 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:00.546 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.546 + for nvme in "${!nvme_files[@]}"
00:02:00.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:00.546 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:00.546 + for nvme in "${!nvme_files[@]}"
00:02:00.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:00.546 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.546 + for nvme in "${!nvme_files[@]}"
00:02:00.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:00.806 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.806 + for nvme in "${!nvme_files[@]}"
00:02:00.806 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:00.806 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.806 + for nvme in "${!nvme_files[@]}"
00:02:00.806 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:01.065 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:01.065 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:01.065 + echo 'End stage prepare_nvme.sh'
00:02:01.065 End stage prepare_nvme.sh
00:02:01.076 [Pipeline] sh
00:02:01.359 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:01.359 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:02:01.359 
00:02:01.359 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:02:01.359 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:02:01.359 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:02:01.359 HELP=0
00:02:01.359 DRY_RUN=0
00:02:01.359 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:02:01.359 NVME_DISKS_TYPE=nvme,nvme,
00:02:01.359 NVME_AUTO_CREATE=0
00:02:01.359 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:02:01.359 NVME_CMB=,,
00:02:01.359 NVME_PMR=,,
00:02:01.359 NVME_ZNS=,,
00:02:01.359 NVME_MS=,,
00:02:01.359 NVME_FDP=,,
00:02:01.359 SPDK_VAGRANT_DISTRO=fedora39
00:02:01.359 SPDK_VAGRANT_VMCPU=10
00:02:01.359 SPDK_VAGRANT_VMRAM=12288
00:02:01.359 SPDK_VAGRANT_PROVIDER=libvirt
00:02:01.359 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:01.359 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:01.359 SPDK_OPENSTACK_NETWORK=0
00:02:01.359 VAGRANT_PACKAGE_BOX=0
00:02:01.359 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:01.359 FORCE_DISTRO=true
00:02:01.359 VAGRANT_BOX_VERSION=
00:02:01.359 EXTRA_VAGRANTFILES=
00:02:01.359 NIC_MODEL=virtio
00:02:01.359 
00:02:01.359 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:02:01.359 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:02:03.924 Bringing machine 'default' up with 'libvirt' provider...
00:02:04.183 ==> default: Creating image (snapshot of base box volume).
00:02:04.443 ==> default: Creating domain with the following settings...
00:02:04.443 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1730823377_a0160684d10aa39c9328
00:02:04.443 ==> default:  -- Domain type: kvm
00:02:04.443 ==> default:  -- Cpus: 10
00:02:04.443 ==> default:  -- Feature: acpi
00:02:04.443 ==> default:  -- Feature: apic
00:02:04.443 ==> default:  -- Feature: pae
00:02:04.443 ==> default:  -- Memory: 12288M
00:02:04.443 ==> default:  -- Memory Backing: hugepages:
00:02:04.443 ==> default:  -- Management MAC:
00:02:04.443 ==> default:  -- Loader:
00:02:04.443 ==> default:  -- Nvram:
00:02:04.443 ==> default:  -- Base box: spdk/fedora39
00:02:04.443 ==> default:  -- Storage pool: default
00:02:04.443 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730823377_a0160684d10aa39c9328.img (20G)
00:02:04.443 ==> default:  -- Volume Cache: default
00:02:04.443 ==> default:  -- Kernel:
00:02:04.443 ==> default:  -- Initrd:
00:02:04.443 ==> default:  -- Graphics Type: vnc
00:02:04.443 ==> default:  -- Graphics Port: -1
00:02:04.443 ==> default:  -- Graphics IP: 127.0.0.1
00:02:04.443 ==> default:  -- Graphics Password: Not defined
00:02:04.443 ==> default:  -- Video Type: cirrus
00:02:04.443 ==> default:  -- Video VRAM: 9216
00:02:04.443 ==> default:  -- Sound Type:
00:02:04.443 ==> default:  -- Keymap: en-us
00:02:04.443 ==> default:  -- TPM Path:
00:02:04.443 ==> default:  -- INPUT: type=mouse, bus=ps2
00:02:04.443 ==> default:  -- Command line args:
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:04.443 ==> default:  -> value=-drive,
00:02:04.443 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:04.443 ==> default:  -> value=-drive,
00:02:04.443 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.443 ==> default:  -> value=-drive,
00:02:04.443 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.443 ==> default:  -> value=-drive,
00:02:04.443 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:04.443 ==> default:  -> value=-device,
00:02:04.443 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.443 ==> default: Creating shared folders metadata...
00:02:04.443 ==> default: Starting domain.
00:02:06.350 ==> default: Waiting for domain to get an IP address...
00:02:24.440 ==> default: Waiting for SSH to become available...
00:02:24.440 ==> default: Configuring and enabling network interfaces...
00:02:29.712     default: SSH address: 192.168.121.56:22
00:02:29.712     default: SSH username: vagrant
00:02:29.712     default: SSH auth method: private key
00:02:32.262 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:40.402 ==> default: Mounting SSHFS shared folder...
00:02:42.939 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:42.939 ==> default: Checking Mount..
00:02:44.318 ==> default: Folder Successfully Mounted!
00:02:44.318 ==> default: Running provisioner: file...
00:02:45.254     default: ~/.gitconfig => .gitconfig
00:02:45.826 
00:02:45.826 SUCCESS!
00:02:45.826 
00:02:45.826 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:45.826 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:45.826 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:45.826 
00:02:45.836 [Pipeline] }
00:02:45.850 [Pipeline] // stage
00:02:45.859 [Pipeline] dir
00:02:45.860 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:45.861 [Pipeline] {
00:02:45.874 [Pipeline] catchError
00:02:45.876 [Pipeline] {
00:02:45.888 [Pipeline] sh
00:02:46.171 + vagrant ssh-config --host vagrant
00:02:46.171 + sed -ne /^Host/,$p
00:02:46.171 + tee ssh_conf
00:02:48.707 Host vagrant
00:02:48.707   HostName 192.168.121.56
00:02:48.707   User vagrant
00:02:48.707   Port 22
00:02:48.707   UserKnownHostsFile /dev/null
00:02:48.707   StrictHostKeyChecking no
00:02:48.707   PasswordAuthentication no
00:02:48.707   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:48.707   IdentitiesOnly yes
00:02:48.707   LogLevel FATAL
00:02:48.707   ForwardAgent yes
00:02:48.707   ForwardX11 yes
00:02:48.707 
00:02:48.721 [Pipeline] withEnv
00:02:48.723 [Pipeline] {
00:02:48.736 [Pipeline] sh
00:02:49.019 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:49.019 source /etc/os-release
00:02:49.019 [[ -e /image.version ]] && img=$(< /image.version)
00:02:49.019 # Minimal, systemd-like check.
00:02:49.019 if [[ -e /.dockerenv ]]; then
00:02:49.019 # Clear garbage from the node's name:
00:02:49.019 # agt-er_autotest_547-896 -> autotest_547-896
00:02:49.019 # $HOSTNAME is the actual container id
00:02:49.019 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:49.019 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:49.019 # We can assume this is a mount from a host where container is running,
00:02:49.019 # so fetch its hostname to easily identify the target swarm worker.
00:02:49.019 container="$(< /etc/hostname) ($agent)"
00:02:49.019 else
00:02:49.019 # Fallback
00:02:49.019 container=$agent
00:02:49.019 fi
00:02:49.019 fi
00:02:49.019 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:49.019 
00:02:49.287 [Pipeline] }
00:02:49.303 [Pipeline] // withEnv
00:02:49.310 [Pipeline] setCustomBuildProperty
00:02:49.323 [Pipeline] stage
00:02:49.325 [Pipeline] { (Tests)
00:02:49.339 [Pipeline] sh
00:02:49.621 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:49.895 [Pipeline] sh
00:02:50.178 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:50.452 [Pipeline] timeout
00:02:50.452 Timeout set to expire in 1 hr 30 min
00:02:50.454 [Pipeline] {
00:02:50.468 [Pipeline] sh
00:02:50.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:51.318 HEAD is now at f2120392b test/scheduler: Account for multiple cpus in the affinity mask
00:02:51.330 [Pipeline] sh
00:02:51.621 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:51.905 [Pipeline] sh
00:02:52.188 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:52.465 [Pipeline] sh
00:02:52.748 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:53.008 ++ readlink -f spdk_repo
00:02:53.008 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:53.008 + [[ -n /home/vagrant/spdk_repo ]]
00:02:53.008 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:53.008 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:53.008 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:53.008 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:53.008 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:53.008 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:53.008 + cd /home/vagrant/spdk_repo
00:02:53.008 + source /etc/os-release
00:02:53.008 ++ NAME='Fedora Linux'
00:02:53.008 ++ VERSION='39 (Cloud Edition)'
00:02:53.008 ++ ID=fedora
00:02:53.008 ++ VERSION_ID=39
00:02:53.008 ++ VERSION_CODENAME=
00:02:53.008 ++ PLATFORM_ID=platform:f39
00:02:53.008 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:53.008 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:53.008 ++ LOGO=fedora-logo-icon
00:02:53.008 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:53.008 ++ HOME_URL=https://fedoraproject.org/
00:02:53.008 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:53.008 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:53.008 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:53.008 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:53.008 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:53.008 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:53.008 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:53.008 ++ SUPPORT_END=2024-11-12
00:02:53.008 ++ VARIANT='Cloud Edition'
00:02:53.008 ++ VARIANT_ID=cloud
00:02:53.008 + uname -a
00:02:53.008 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:53.008 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:53.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:53.577 Hugepages
00:02:53.577 node hugesize free / total
00:02:53.577 node0 1048576kB 0 / 0
00:02:53.577 node0 2048kB 0 / 0
00:02:53.577 
00:02:53.577 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:53.577 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:53.577 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:53.577 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:53.577 + rm -f /tmp/spdk-ld-path
00:02:53.577 + source autorun-spdk.conf
00:02:53.578 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.578 ++ SPDK_RUN_ASAN=1
00:02:53.578 ++ SPDK_RUN_UBSAN=1
00:02:53.578 ++ SPDK_TEST_RAID=1
00:02:53.578 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:53.578 ++ RUN_NIGHTLY=0
00:02:53.578 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:53.578 + [[ -n '' ]]
00:02:53.578 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:53.838 + for M in /var/spdk/build-*-manifest.txt
00:02:53.838 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:53.838 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.838 + for M in /var/spdk/build-*-manifest.txt
00:02:53.838 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:53.838 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.838 + for M in /var/spdk/build-*-manifest.txt
00:02:53.838 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:53.838 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:53.838 ++ uname
00:02:53.838 + [[ Linux == \L\i\n\u\x ]]
00:02:53.838 + sudo dmesg -T
00:02:53.838 + sudo dmesg --clear
00:02:53.838 + dmesg_pid=5427
00:02:53.838 + sudo dmesg -Tw
00:02:53.838 + [[ Fedora Linux == FreeBSD ]]
00:02:53.838 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.838 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:53.838 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:53.838 + [[ -x /usr/src/fio-static/fio ]]
00:02:53.838 + export FIO_BIN=/usr/src/fio-static/fio
00:02:53.838 + FIO_BIN=/usr/src/fio-static/fio
00:02:53.838 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:53.838 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:53.838 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:53.838 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.838 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:53.838 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:53.838 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.838 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:53.838 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.838 16:17:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:53.838 16:17:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:53.838 16:17:06 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:53.838 16:17:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:53.838 16:17:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:54.098 16:17:06 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:02:54.098 16:17:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:54.098 16:17:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:54.098 16:17:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:54.098 16:17:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:54.098 16:17:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:54.098 16:17:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.098 16:17:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.098 16:17:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.098 16:17:07 -- paths/export.sh@5 -- $ export PATH
00:02:54.098 16:17:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:54.098 16:17:07 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:54.098 16:17:07 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:54.098 16:17:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730823427.XXXXXX
00:02:54.098 16:17:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730823427.03CZav
00:02:54.098 16:17:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:54.098 16:17:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:54.098 16:17:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:54.098 16:17:07 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:54.098 16:17:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:54.098 16:17:07 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:54.098 16:17:07 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:54.098 16:17:07 -- common/autotest_common.sh@10 -- $ set +x
00:02:54.098 16:17:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:54.098 16:17:07 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:54.098 16:17:07 -- pm/common@17 -- $ local monitor
00:02:54.098 16:17:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.098 16:17:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.098 16:17:07 -- pm/common@21 -- $ date +%s
00:02:54.098 16:17:07 -- pm/common@25 -- $ sleep 1
00:02:54.098 16:17:07 -- pm/common@21 -- $ date +%s
00:02:54.098 16:17:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730823427
00:02:54.098 16:17:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730823427
00:02:54.098 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730823427_collect-vmstat.pm.log
00:02:54.098 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730823427_collect-cpu-load.pm.log
00:02:55.036 16:17:08 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:55.036 16:17:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:55.036 16:17:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:55.036 16:17:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:55.036 16:17:08 -- spdk/autobuild.sh@16 -- $ date -u
00:02:55.036 Tue Nov 5 04:17:08 PM UTC 2024
00:02:55.036 16:17:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:55.036 v25.01-pre-160-gf2120392b
00:02:55.036 16:17:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:55.036 16:17:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:55.036 16:17:08 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:55.036 16:17:08 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:55.036 16:17:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.036 ************************************
00:02:55.036 START TEST asan
00:02:55.036 ************************************
00:02:55.036 using asan
00:02:55.036 16:17:08 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:55.036 
00:02:55.036 real	0m0.001s
00:02:55.036 user	0m0.000s
00:02:55.036 sys	0m0.000s
00:02:55.036 16:17:08 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:55.036 16:17:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.036 ************************************
00:02:55.036 END TEST asan
00:02:55.036 ************************************
00:02:55.296 16:17:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:55.296 16:17:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:55.296 16:17:08 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:55.296 16:17:08 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:55.296 16:17:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.296 ************************************
00:02:55.296 START TEST ubsan
00:02:55.296 ************************************
00:02:55.296 using ubsan
00:02:55.296 16:17:08 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:55.296 
00:02:55.296 real	0m0.000s
00:02:55.296 user	0m0.000s
00:02:55.296 sys	0m0.000s
00:02:55.296 16:17:08 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:55.296 16:17:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.296 ************************************
00:02:55.296 END TEST ubsan
00:02:55.296 ************************************
00:02:55.296 16:17:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:55.296 16:17:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:55.296 16:17:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:55.296 16:17:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:55.296 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:55.296 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:55.865 Using 'verbs' RDMA provider
00:03:12.164 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:30.281 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:30.281 Creating mk/config.mk...done.
00:03:30.281 Creating mk/cc.flags.mk...done.
00:03:30.281 Type 'make' to build.
00:03:30.281 16:17:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:30.281 16:17:41 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:30.281 16:17:41 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:30.281 16:17:41 -- common/autotest_common.sh@10 -- $ set +x
00:03:30.281 ************************************
00:03:30.281 START TEST make
00:03:30.281 ************************************
00:03:30.281 16:17:41 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:30.281 make[1]: Nothing to be done for 'all'.
00:03:40.264 The Meson build system
00:03:40.265 Version: 1.5.0
00:03:40.265 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:40.265 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:40.265 Build type: native build
00:03:40.265 Program cat found: YES (/usr/bin/cat)
00:03:40.265 Project name: DPDK
00:03:40.265 Project version: 24.03.0
00:03:40.265 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:40.265 C linker for the host machine: cc ld.bfd 2.40-14
00:03:40.265 Host machine cpu family: x86_64
00:03:40.265 Host machine cpu: x86_64
00:03:40.265 Message: ## Building in Developer Mode ##
00:03:40.265 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:40.265 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:40.265 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:40.265 Program python3 found: YES (/usr/bin/python3)
00:03:40.265 Program cat found: YES (/usr/bin/cat)
00:03:40.265 Compiler for C supports arguments -march=native: YES
00:03:40.265 Checking for size of "void *" : 8
00:03:40.265 Checking for size of "void *" : 8 (cached)
00:03:40.265 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:40.265 Library m found: YES
00:03:40.265 Library numa found: YES
00:03:40.265 Has header "numaif.h" : YES
00:03:40.265 Library fdt found: NO
00:03:40.265 Library execinfo found: NO
00:03:40.265 Has header "execinfo.h" : YES
00:03:40.265 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:40.265 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:40.265 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:40.265 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:40.265 Run-time dependency openssl found: YES 3.1.1
00:03:40.265 Run-time dependency libpcap found: YES 1.10.4
00:03:40.265 Has header "pcap.h" with dependency
libpcap: YES 00:03:40.265 Compiler for C supports arguments -Wcast-qual: YES 00:03:40.265 Compiler for C supports arguments -Wdeprecated: YES 00:03:40.265 Compiler for C supports arguments -Wformat: YES 00:03:40.265 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:40.265 Compiler for C supports arguments -Wformat-security: NO 00:03:40.265 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:40.265 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:40.265 Compiler for C supports arguments -Wnested-externs: YES 00:03:40.265 Compiler for C supports arguments -Wold-style-definition: YES 00:03:40.265 Compiler for C supports arguments -Wpointer-arith: YES 00:03:40.265 Compiler for C supports arguments -Wsign-compare: YES 00:03:40.265 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:40.265 Compiler for C supports arguments -Wundef: YES 00:03:40.265 Compiler for C supports arguments -Wwrite-strings: YES 00:03:40.265 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:40.265 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:40.265 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:40.265 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:40.265 Program objdump found: YES (/usr/bin/objdump) 00:03:40.265 Compiler for C supports arguments -mavx512f: YES 00:03:40.265 Checking if "AVX512 checking" compiles: YES 00:03:40.265 Fetching value of define "__SSE4_2__" : 1 00:03:40.265 Fetching value of define "__AES__" : 1 00:03:40.265 Fetching value of define "__AVX__" : 1 00:03:40.265 Fetching value of define "__AVX2__" : 1 00:03:40.265 Fetching value of define "__AVX512BW__" : 1 00:03:40.265 Fetching value of define "__AVX512CD__" : 1 00:03:40.265 Fetching value of define "__AVX512DQ__" : 1 00:03:40.265 Fetching value of define "__AVX512F__" : 1 00:03:40.265 Fetching value of define "__AVX512VL__" : 1 00:03:40.265 Fetching value of define 
"__PCLMUL__" : 1 00:03:40.265 Fetching value of define "__RDRND__" : 1 00:03:40.265 Fetching value of define "__RDSEED__" : 1 00:03:40.265 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:40.265 Fetching value of define "__znver1__" : (undefined) 00:03:40.265 Fetching value of define "__znver2__" : (undefined) 00:03:40.265 Fetching value of define "__znver3__" : (undefined) 00:03:40.265 Fetching value of define "__znver4__" : (undefined) 00:03:40.265 Library asan found: YES 00:03:40.265 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:40.265 Message: lib/log: Defining dependency "log" 00:03:40.265 Message: lib/kvargs: Defining dependency "kvargs" 00:03:40.265 Message: lib/telemetry: Defining dependency "telemetry" 00:03:40.265 Library rt found: YES 00:03:40.265 Checking for function "getentropy" : NO 00:03:40.265 Message: lib/eal: Defining dependency "eal" 00:03:40.265 Message: lib/ring: Defining dependency "ring" 00:03:40.265 Message: lib/rcu: Defining dependency "rcu" 00:03:40.265 Message: lib/mempool: Defining dependency "mempool" 00:03:40.265 Message: lib/mbuf: Defining dependency "mbuf" 00:03:40.265 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:40.265 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:40.265 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:40.265 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:40.265 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:40.265 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:40.265 Compiler for C supports arguments -mpclmul: YES 00:03:40.265 Compiler for C supports arguments -maes: YES 00:03:40.265 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:40.265 Compiler for C supports arguments -mavx512bw: YES 00:03:40.265 Compiler for C supports arguments -mavx512dq: YES 00:03:40.265 Compiler for C supports arguments -mavx512vl: YES 00:03:40.265 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:40.265 Compiler for C supports arguments -mavx2: YES 00:03:40.265 Compiler for C supports arguments -mavx: YES 00:03:40.265 Message: lib/net: Defining dependency "net" 00:03:40.265 Message: lib/meter: Defining dependency "meter" 00:03:40.265 Message: lib/ethdev: Defining dependency "ethdev" 00:03:40.265 Message: lib/pci: Defining dependency "pci" 00:03:40.265 Message: lib/cmdline: Defining dependency "cmdline" 00:03:40.265 Message: lib/hash: Defining dependency "hash" 00:03:40.265 Message: lib/timer: Defining dependency "timer" 00:03:40.265 Message: lib/compressdev: Defining dependency "compressdev" 00:03:40.265 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:40.265 Message: lib/dmadev: Defining dependency "dmadev" 00:03:40.265 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:40.265 Message: lib/power: Defining dependency "power" 00:03:40.265 Message: lib/reorder: Defining dependency "reorder" 00:03:40.265 Message: lib/security: Defining dependency "security" 00:03:40.265 Has header "linux/userfaultfd.h" : YES 00:03:40.265 Has header "linux/vduse.h" : YES 00:03:40.265 Message: lib/vhost: Defining dependency "vhost" 00:03:40.265 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:40.265 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:40.265 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:40.265 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:40.265 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:40.265 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:40.265 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:40.265 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:40.265 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:40.265 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:40.265 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:40.265 Configuring doxy-api-html.conf using configuration 00:03:40.265 Configuring doxy-api-man.conf using configuration 00:03:40.265 Program mandb found: YES (/usr/bin/mandb) 00:03:40.265 Program sphinx-build found: NO 00:03:40.265 Configuring rte_build_config.h using configuration 00:03:40.265 Message: 00:03:40.265 ================= 00:03:40.265 Applications Enabled 00:03:40.265 ================= 00:03:40.265 00:03:40.265 apps: 00:03:40.265 00:03:40.265 00:03:40.265 Message: 00:03:40.265 ================= 00:03:40.265 Libraries Enabled 00:03:40.265 ================= 00:03:40.265 00:03:40.265 libs: 00:03:40.265 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:40.265 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:40.265 cryptodev, dmadev, power, reorder, security, vhost, 00:03:40.265 00:03:40.265 Message: 00:03:40.265 =============== 00:03:40.265 Drivers Enabled 00:03:40.265 =============== 00:03:40.265 00:03:40.265 common: 00:03:40.265 00:03:40.265 bus: 00:03:40.265 pci, vdev, 00:03:40.265 mempool: 00:03:40.265 ring, 00:03:40.265 dma: 00:03:40.265 00:03:40.265 net: 00:03:40.265 00:03:40.265 crypto: 00:03:40.265 00:03:40.265 compress: 00:03:40.265 00:03:40.265 vdpa: 00:03:40.265 00:03:40.265 00:03:40.265 Message: 00:03:40.265 ================= 00:03:40.265 Content Skipped 00:03:40.265 ================= 00:03:40.265 00:03:40.265 apps: 00:03:40.265 dumpcap: explicitly disabled via build config 00:03:40.265 graph: explicitly disabled via build config 00:03:40.265 pdump: explicitly disabled via build config 00:03:40.265 proc-info: explicitly disabled via build config 00:03:40.265 test-acl: explicitly disabled via build config 00:03:40.265 test-bbdev: explicitly disabled via build config 00:03:40.265 test-cmdline: explicitly disabled via build config 00:03:40.265 test-compress-perf: explicitly disabled via build config 00:03:40.265 test-crypto-perf: explicitly disabled via build 
config 00:03:40.265 test-dma-perf: explicitly disabled via build config 00:03:40.265 test-eventdev: explicitly disabled via build config 00:03:40.265 test-fib: explicitly disabled via build config 00:03:40.265 test-flow-perf: explicitly disabled via build config 00:03:40.265 test-gpudev: explicitly disabled via build config 00:03:40.265 test-mldev: explicitly disabled via build config 00:03:40.265 test-pipeline: explicitly disabled via build config 00:03:40.265 test-pmd: explicitly disabled via build config 00:03:40.265 test-regex: explicitly disabled via build config 00:03:40.265 test-sad: explicitly disabled via build config 00:03:40.265 test-security-perf: explicitly disabled via build config 00:03:40.265 00:03:40.265 libs: 00:03:40.265 argparse: explicitly disabled via build config 00:03:40.266 metrics: explicitly disabled via build config 00:03:40.266 acl: explicitly disabled via build config 00:03:40.266 bbdev: explicitly disabled via build config 00:03:40.266 bitratestats: explicitly disabled via build config 00:03:40.266 bpf: explicitly disabled via build config 00:03:40.266 cfgfile: explicitly disabled via build config 00:03:40.266 distributor: explicitly disabled via build config 00:03:40.266 efd: explicitly disabled via build config 00:03:40.266 eventdev: explicitly disabled via build config 00:03:40.266 dispatcher: explicitly disabled via build config 00:03:40.266 gpudev: explicitly disabled via build config 00:03:40.266 gro: explicitly disabled via build config 00:03:40.266 gso: explicitly disabled via build config 00:03:40.266 ip_frag: explicitly disabled via build config 00:03:40.266 jobstats: explicitly disabled via build config 00:03:40.266 latencystats: explicitly disabled via build config 00:03:40.266 lpm: explicitly disabled via build config 00:03:40.266 member: explicitly disabled via build config 00:03:40.266 pcapng: explicitly disabled via build config 00:03:40.266 rawdev: explicitly disabled via build config 00:03:40.266 regexdev: explicitly 
disabled via build config 00:03:40.266 mldev: explicitly disabled via build config 00:03:40.266 rib: explicitly disabled via build config 00:03:40.266 sched: explicitly disabled via build config 00:03:40.266 stack: explicitly disabled via build config 00:03:40.266 ipsec: explicitly disabled via build config 00:03:40.266 pdcp: explicitly disabled via build config 00:03:40.266 fib: explicitly disabled via build config 00:03:40.266 port: explicitly disabled via build config 00:03:40.266 pdump: explicitly disabled via build config 00:03:40.266 table: explicitly disabled via build config 00:03:40.266 pipeline: explicitly disabled via build config 00:03:40.266 graph: explicitly disabled via build config 00:03:40.266 node: explicitly disabled via build config 00:03:40.266 00:03:40.266 drivers: 00:03:40.266 common/cpt: not in enabled drivers build config 00:03:40.266 common/dpaax: not in enabled drivers build config 00:03:40.266 common/iavf: not in enabled drivers build config 00:03:40.266 common/idpf: not in enabled drivers build config 00:03:40.266 common/ionic: not in enabled drivers build config 00:03:40.266 common/mvep: not in enabled drivers build config 00:03:40.266 common/octeontx: not in enabled drivers build config 00:03:40.266 bus/auxiliary: not in enabled drivers build config 00:03:40.266 bus/cdx: not in enabled drivers build config 00:03:40.266 bus/dpaa: not in enabled drivers build config 00:03:40.266 bus/fslmc: not in enabled drivers build config 00:03:40.266 bus/ifpga: not in enabled drivers build config 00:03:40.266 bus/platform: not in enabled drivers build config 00:03:40.266 bus/uacce: not in enabled drivers build config 00:03:40.266 bus/vmbus: not in enabled drivers build config 00:03:40.266 common/cnxk: not in enabled drivers build config 00:03:40.266 common/mlx5: not in enabled drivers build config 00:03:40.266 common/nfp: not in enabled drivers build config 00:03:40.266 common/nitrox: not in enabled drivers build config 00:03:40.266 common/qat: not 
in enabled drivers build config 00:03:40.266 common/sfc_efx: not in enabled drivers build config 00:03:40.266 mempool/bucket: not in enabled drivers build config 00:03:40.266 mempool/cnxk: not in enabled drivers build config 00:03:40.266 mempool/dpaa: not in enabled drivers build config 00:03:40.266 mempool/dpaa2: not in enabled drivers build config 00:03:40.266 mempool/octeontx: not in enabled drivers build config 00:03:40.266 mempool/stack: not in enabled drivers build config 00:03:40.266 dma/cnxk: not in enabled drivers build config 00:03:40.266 dma/dpaa: not in enabled drivers build config 00:03:40.266 dma/dpaa2: not in enabled drivers build config 00:03:40.266 dma/hisilicon: not in enabled drivers build config 00:03:40.266 dma/idxd: not in enabled drivers build config 00:03:40.266 dma/ioat: not in enabled drivers build config 00:03:40.266 dma/skeleton: not in enabled drivers build config 00:03:40.266 net/af_packet: not in enabled drivers build config 00:03:40.266 net/af_xdp: not in enabled drivers build config 00:03:40.266 net/ark: not in enabled drivers build config 00:03:40.266 net/atlantic: not in enabled drivers build config 00:03:40.266 net/avp: not in enabled drivers build config 00:03:40.266 net/axgbe: not in enabled drivers build config 00:03:40.266 net/bnx2x: not in enabled drivers build config 00:03:40.266 net/bnxt: not in enabled drivers build config 00:03:40.266 net/bonding: not in enabled drivers build config 00:03:40.266 net/cnxk: not in enabled drivers build config 00:03:40.266 net/cpfl: not in enabled drivers build config 00:03:40.266 net/cxgbe: not in enabled drivers build config 00:03:40.266 net/dpaa: not in enabled drivers build config 00:03:40.266 net/dpaa2: not in enabled drivers build config 00:03:40.266 net/e1000: not in enabled drivers build config 00:03:40.266 net/ena: not in enabled drivers build config 00:03:40.266 net/enetc: not in enabled drivers build config 00:03:40.266 net/enetfec: not in enabled drivers build config 
00:03:40.266 net/enic: not in enabled drivers build config 00:03:40.266 net/failsafe: not in enabled drivers build config 00:03:40.266 net/fm10k: not in enabled drivers build config 00:03:40.266 net/gve: not in enabled drivers build config 00:03:40.266 net/hinic: not in enabled drivers build config 00:03:40.266 net/hns3: not in enabled drivers build config 00:03:40.266 net/i40e: not in enabled drivers build config 00:03:40.266 net/iavf: not in enabled drivers build config 00:03:40.266 net/ice: not in enabled drivers build config 00:03:40.266 net/idpf: not in enabled drivers build config 00:03:40.266 net/igc: not in enabled drivers build config 00:03:40.266 net/ionic: not in enabled drivers build config 00:03:40.266 net/ipn3ke: not in enabled drivers build config 00:03:40.266 net/ixgbe: not in enabled drivers build config 00:03:40.266 net/mana: not in enabled drivers build config 00:03:40.266 net/memif: not in enabled drivers build config 00:03:40.266 net/mlx4: not in enabled drivers build config 00:03:40.266 net/mlx5: not in enabled drivers build config 00:03:40.266 net/mvneta: not in enabled drivers build config 00:03:40.266 net/mvpp2: not in enabled drivers build config 00:03:40.266 net/netvsc: not in enabled drivers build config 00:03:40.266 net/nfb: not in enabled drivers build config 00:03:40.266 net/nfp: not in enabled drivers build config 00:03:40.266 net/ngbe: not in enabled drivers build config 00:03:40.266 net/null: not in enabled drivers build config 00:03:40.266 net/octeontx: not in enabled drivers build config 00:03:40.266 net/octeon_ep: not in enabled drivers build config 00:03:40.266 net/pcap: not in enabled drivers build config 00:03:40.266 net/pfe: not in enabled drivers build config 00:03:40.266 net/qede: not in enabled drivers build config 00:03:40.266 net/ring: not in enabled drivers build config 00:03:40.266 net/sfc: not in enabled drivers build config 00:03:40.266 net/softnic: not in enabled drivers build config 00:03:40.266 net/tap: not in 
enabled drivers build config 00:03:40.266 net/thunderx: not in enabled drivers build config 00:03:40.266 net/txgbe: not in enabled drivers build config 00:03:40.266 net/vdev_netvsc: not in enabled drivers build config 00:03:40.266 net/vhost: not in enabled drivers build config 00:03:40.266 net/virtio: not in enabled drivers build config 00:03:40.266 net/vmxnet3: not in enabled drivers build config 00:03:40.266 raw/*: missing internal dependency, "rawdev" 00:03:40.266 crypto/armv8: not in enabled drivers build config 00:03:40.266 crypto/bcmfs: not in enabled drivers build config 00:03:40.266 crypto/caam_jr: not in enabled drivers build config 00:03:40.266 crypto/ccp: not in enabled drivers build config 00:03:40.266 crypto/cnxk: not in enabled drivers build config 00:03:40.266 crypto/dpaa_sec: not in enabled drivers build config 00:03:40.266 crypto/dpaa2_sec: not in enabled drivers build config 00:03:40.266 crypto/ipsec_mb: not in enabled drivers build config 00:03:40.266 crypto/mlx5: not in enabled drivers build config 00:03:40.266 crypto/mvsam: not in enabled drivers build config 00:03:40.266 crypto/nitrox: not in enabled drivers build config 00:03:40.266 crypto/null: not in enabled drivers build config 00:03:40.266 crypto/octeontx: not in enabled drivers build config 00:03:40.266 crypto/openssl: not in enabled drivers build config 00:03:40.266 crypto/scheduler: not in enabled drivers build config 00:03:40.266 crypto/uadk: not in enabled drivers build config 00:03:40.266 crypto/virtio: not in enabled drivers build config 00:03:40.266 compress/isal: not in enabled drivers build config 00:03:40.266 compress/mlx5: not in enabled drivers build config 00:03:40.266 compress/nitrox: not in enabled drivers build config 00:03:40.266 compress/octeontx: not in enabled drivers build config 00:03:40.266 compress/zlib: not in enabled drivers build config 00:03:40.266 regex/*: missing internal dependency, "regexdev" 00:03:40.266 ml/*: missing internal dependency, "mldev" 
00:03:40.266 vdpa/ifc: not in enabled drivers build config 00:03:40.266 vdpa/mlx5: not in enabled drivers build config 00:03:40.266 vdpa/nfp: not in enabled drivers build config 00:03:40.266 vdpa/sfc: not in enabled drivers build config 00:03:40.266 event/*: missing internal dependency, "eventdev" 00:03:40.266 baseband/*: missing internal dependency, "bbdev" 00:03:40.266 gpu/*: missing internal dependency, "gpudev" 00:03:40.266 00:03:40.266 00:03:40.266 Build targets in project: 85 00:03:40.266 00:03:40.266 DPDK 24.03.0 00:03:40.266 00:03:40.266 User defined options 00:03:40.266 buildtype : debug 00:03:40.266 default_library : shared 00:03:40.266 libdir : lib 00:03:40.266 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:40.266 b_sanitize : address 00:03:40.266 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:40.266 c_link_args : 00:03:40.266 cpu_instruction_set: native 00:03:40.266 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:40.266 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:40.266 enable_docs : false 00:03:40.266 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:40.266 enable_kmods : false 00:03:40.267 max_lcores : 128 00:03:40.267 tests : false 00:03:40.267 00:03:40.267 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:40.835 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:41.094 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:41.094 [2/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:41.094 [3/268] Linking static target lib/librte_kvargs.a 00:03:41.094 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:41.094 [5/268] Linking static target lib/librte_log.a 00:03:41.094 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:41.353 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:41.353 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:41.353 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:41.614 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.614 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:41.615 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:41.615 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:41.615 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:41.615 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:41.615 [16/268] Linking static target lib/librte_telemetry.a 00:03:41.876 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:41.876 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:41.876 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.135 [20/268] Linking target lib/librte_log.so.24.1 00:03:42.135 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.135 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.135 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.135 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.135 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.393 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:42.393 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.393 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.393 [29/268] Linking target lib/librte_kvargs.so.24.1 00:03:42.393 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:42.651 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:42.651 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.651 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:42.651 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:42.651 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:42.909 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:42.909 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:42.909 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:42.909 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:42.909 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:42.909 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:42.909 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:42.909 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:43.167 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:43.167 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:43.167 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:43.426 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:43.426 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:43.426 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:43.426 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:43.686 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:43.686 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:43.686 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:43.686 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:43.946 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:43.946 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:43.946 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:43.946 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:44.205 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:44.205 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:44.205 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:44.205 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:44.205 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:44.205 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:44.464 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:44.464 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:44.723 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:44.723 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:44.723 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:44.983 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:44.983 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:44.983 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:44.983 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:44.983 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:44.983 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:44.983 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:45.242 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:45.242 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:45.242 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:45.500 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:45.500 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:45.500 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:45.500 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:45.500 [84/268] Linking static target lib/librte_ring.a 00:03:45.500 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:45.760 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:45.760 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:45.760 [88/268] Linking static target lib/librte_eal.a 00:03:45.760 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:46.019 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:46.019 [91/268] Linking static target lib/librte_rcu.a 00:03:46.019 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:46.019 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.019 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:46.019 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:46.019 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:46.279 [97/268] Linking static target lib/librte_mempool.a 00:03:46.279 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:46.279 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:46.279 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:46.538 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.538 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:46.538 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:46.539 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:46.539 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:46.539 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:46.798 [107/268] Linking static target lib/librte_net.a 00:03:46.798 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:46.798 [109/268] Linking static target lib/librte_meter.a 00:03:46.798 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:46.798 [111/268] Linking static target lib/librte_mbuf.a 00:03:47.059 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:47.059 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.059 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:47.318 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:47.318 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.577 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:47.577 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:47.577 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:47.836 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:47.836 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:47.836 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.836 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:48.096 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:48.096 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:48.355 [126/268] Linking static target lib/librte_pci.a 00:03:48.355 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:48.355 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:48.355 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:48.355 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:48.355 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:48.614 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:48.614 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.614 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:48.614 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:48.614 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:48.874 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:48.874 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:48.874 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:48.874 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:48.874 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:48.874 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:48.874 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:48.875 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:48.875 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:48.875 [146/268] Linking static target lib/librte_cmdline.a 00:03:49.133 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:49.393 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:49.393 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:49.393 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:49.393 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:49.393 [152/268] Linking static target lib/librte_timer.a 00:03:49.960 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:49.960 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:49.960 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:49.960 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:49.960 [157/268] Linking static target lib/librte_compressdev.a 00:03:50.220 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:50.220 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:50.220 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:50.479 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:50.479 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:50.479 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:50.479 [164/268] Linking static target lib/librte_hash.a 00:03:50.479 [165/268] Linking static target lib/librte_ethdev.a 00:03:50.479 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:50.739 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:50.739 [168/268] Linking static target lib/librte_dmadev.a 00:03:50.739 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:50.739 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.739 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:51.016 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:51.016 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:51.016 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.296 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:51.296 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:51.296 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:51.556 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.556 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:51.556 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:51.556 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:51.556 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:51.556 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:51.816 [184/268] Linking static target lib/librte_cryptodev.a 00:03:51.816 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:51.816 [186/268] Linking static target lib/librte_power.a 00:03:51.816 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:52.075 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:52.075 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:52.075 [190/268] Linking static target lib/librte_security.a 00:03:52.335 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:52.335 [192/268] Linking static target lib/librte_reorder.a 00:03:52.335 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:52.595 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:52.854 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.114 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:53.114 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.114 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.114 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:53.114 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:53.374 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:53.634 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:53.634 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:53.634 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:53.894 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:53.894 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:53.894 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:53.894 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:54.153 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.153 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:54.153 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:54.153 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:54.153 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.153 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.412 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:54.412 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:54.412 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:54.412 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:54.412 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:54.412 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:54.412 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:54.672 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.672 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:54.672 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:54.672 [225/268] 
Linking static target drivers/librte_mempool_ring.a 00:03:54.672 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:54.932 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.870 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:56.811 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.069 [230/268] Linking target lib/librte_eal.so.24.1 00:03:57.069 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:57.327 [232/268] Linking target lib/librte_ring.so.24.1 00:03:57.327 [233/268] Linking target lib/librte_meter.so.24.1 00:03:57.328 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:57.328 [235/268] Linking target lib/librte_timer.so.24.1 00:03:57.328 [236/268] Linking target lib/librte_pci.so.24.1 00:03:57.328 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:57.328 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:57.328 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:57.328 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:57.328 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:57.328 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:57.328 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:57.587 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:57.587 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:57.587 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:57.587 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:57.587 [248/268] Linking target 
lib/librte_mbuf.so.24.1 00:03:57.587 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:57.846 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:57.846 [251/268] Linking target lib/librte_net.so.24.1 00:03:57.846 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:57.846 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:57.846 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:58.105 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:58.105 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:58.105 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:58.105 [258/268] Linking target lib/librte_security.so.24.1 00:03:58.105 [259/268] Linking target lib/librte_hash.so.24.1 00:03:58.105 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:59.483 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.483 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:59.741 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:59.741 [264/268] Linking target lib/librte_power.so.24.1 00:04:00.680 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:00.938 [266/268] Linking static target lib/librte_vhost.a 00:04:03.473 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.473 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:03.473 INFO: autodetecting backend as ninja 00:04:03.473 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:25.415 CC lib/ut_mock/mock.o 00:04:25.415 CC lib/ut/ut.o 00:04:25.415 CC lib/log/log_deprecated.o 00:04:25.415 CC lib/log/log_flags.o 00:04:25.415 CC lib/log/log.o 00:04:25.415 LIB 
libspdk_ut.a 00:04:25.415 LIB libspdk_ut_mock.a 00:04:25.415 SO libspdk_ut_mock.so.6.0 00:04:25.415 LIB libspdk_log.a 00:04:25.415 SO libspdk_ut.so.2.0 00:04:25.415 SO libspdk_log.so.7.1 00:04:25.415 SYMLINK libspdk_ut_mock.so 00:04:25.415 SYMLINK libspdk_ut.so 00:04:25.415 SYMLINK libspdk_log.so 00:04:25.415 CC lib/util/base64.o 00:04:25.415 CC lib/util/bit_array.o 00:04:25.415 CC lib/dma/dma.o 00:04:25.415 CC lib/util/crc16.o 00:04:25.415 CC lib/util/crc32.o 00:04:25.415 CC lib/util/cpuset.o 00:04:25.415 CC lib/ioat/ioat.o 00:04:25.415 CC lib/util/crc32c.o 00:04:25.415 CXX lib/trace_parser/trace.o 00:04:25.415 CC lib/vfio_user/host/vfio_user_pci.o 00:04:25.415 CC lib/util/crc32_ieee.o 00:04:25.415 CC lib/util/crc64.o 00:04:25.415 CC lib/util/dif.o 00:04:25.415 CC lib/vfio_user/host/vfio_user.o 00:04:25.415 LIB libspdk_dma.a 00:04:25.415 CC lib/util/fd.o 00:04:25.415 CC lib/util/fd_group.o 00:04:25.415 SO libspdk_dma.so.5.0 00:04:25.415 CC lib/util/file.o 00:04:25.415 CC lib/util/hexlify.o 00:04:25.415 SYMLINK libspdk_dma.so 00:04:25.415 CC lib/util/iov.o 00:04:25.415 LIB libspdk_ioat.a 00:04:25.415 SO libspdk_ioat.so.7.0 00:04:25.415 CC lib/util/math.o 00:04:25.415 CC lib/util/net.o 00:04:25.415 SYMLINK libspdk_ioat.so 00:04:25.415 LIB libspdk_vfio_user.a 00:04:25.415 CC lib/util/pipe.o 00:04:25.415 SO libspdk_vfio_user.so.5.0 00:04:25.415 CC lib/util/strerror_tls.o 00:04:25.415 CC lib/util/string.o 00:04:25.415 SYMLINK libspdk_vfio_user.so 00:04:25.415 CC lib/util/uuid.o 00:04:25.415 CC lib/util/xor.o 00:04:25.415 CC lib/util/zipf.o 00:04:25.675 CC lib/util/md5.o 00:04:25.934 LIB libspdk_util.a 00:04:25.934 SO libspdk_util.so.10.1 00:04:26.194 LIB libspdk_trace_parser.a 00:04:26.194 SYMLINK libspdk_util.so 00:04:26.194 SO libspdk_trace_parser.so.6.0 00:04:26.453 SYMLINK libspdk_trace_parser.so 00:04:26.453 CC lib/vmd/vmd.o 00:04:26.453 CC lib/vmd/led.o 00:04:26.453 CC lib/json/json_parse.o 00:04:26.453 CC lib/json/json_util.o 00:04:26.453 CC lib/conf/conf.o 
00:04:26.453 CC lib/json/json_write.o 00:04:26.453 CC lib/idxd/idxd.o 00:04:26.453 CC lib/env_dpdk/env.o 00:04:26.453 CC lib/rdma_utils/rdma_utils.o 00:04:26.453 CC lib/rdma_provider/common.o 00:04:26.711 CC lib/env_dpdk/memory.o 00:04:26.711 LIB libspdk_conf.a 00:04:26.711 CC lib/env_dpdk/pci.o 00:04:26.711 CC lib/env_dpdk/init.o 00:04:26.711 SO libspdk_conf.so.6.0 00:04:26.711 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:26.711 LIB libspdk_json.a 00:04:26.711 SYMLINK libspdk_conf.so 00:04:26.711 CC lib/idxd/idxd_user.o 00:04:26.711 SO libspdk_json.so.6.0 00:04:26.711 LIB libspdk_rdma_utils.a 00:04:26.711 SO libspdk_rdma_utils.so.1.0 00:04:26.972 SYMLINK libspdk_json.so 00:04:26.972 CC lib/idxd/idxd_kernel.o 00:04:26.972 SYMLINK libspdk_rdma_utils.so 00:04:26.972 LIB libspdk_rdma_provider.a 00:04:26.972 SO libspdk_rdma_provider.so.6.0 00:04:26.972 SYMLINK libspdk_rdma_provider.so 00:04:26.972 CC lib/env_dpdk/threads.o 00:04:26.972 CC lib/env_dpdk/pci_ioat.o 00:04:26.972 CC lib/env_dpdk/pci_virtio.o 00:04:27.230 CC lib/env_dpdk/pci_vmd.o 00:04:27.230 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.230 CC lib/env_dpdk/pci_idxd.o 00:04:27.230 CC lib/env_dpdk/pci_event.o 00:04:27.230 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.230 CC lib/env_dpdk/sigbus_handler.o 00:04:27.230 LIB libspdk_idxd.a 00:04:27.230 CC lib/env_dpdk/pci_dpdk.o 00:04:27.230 LIB libspdk_vmd.a 00:04:27.230 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:27.230 SO libspdk_idxd.so.12.1 00:04:27.230 SO libspdk_vmd.so.6.0 00:04:27.490 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.490 SYMLINK libspdk_vmd.so 00:04:27.490 SYMLINK libspdk_idxd.so 00:04:27.490 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.490 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:27.750 LIB libspdk_jsonrpc.a 00:04:27.750 SO libspdk_jsonrpc.so.6.0 00:04:27.750 SYMLINK libspdk_jsonrpc.so 00:04:28.320 CC lib/rpc/rpc.o 00:04:28.320 LIB libspdk_env_dpdk.a 00:04:28.320 SO libspdk_env_dpdk.so.15.1 00:04:28.580 LIB libspdk_rpc.a 00:04:28.580 SO libspdk_rpc.so.6.0 
00:04:28.580 SYMLINK libspdk_env_dpdk.so 00:04:28.580 SYMLINK libspdk_rpc.so 00:04:29.149 CC lib/trace/trace.o 00:04:29.149 CC lib/trace/trace_rpc.o 00:04:29.149 CC lib/trace/trace_flags.o 00:04:29.149 CC lib/notify/notify.o 00:04:29.149 CC lib/notify/notify_rpc.o 00:04:29.149 CC lib/keyring/keyring.o 00:04:29.149 CC lib/keyring/keyring_rpc.o 00:04:29.408 LIB libspdk_notify.a 00:04:29.408 SO libspdk_notify.so.6.0 00:04:29.408 LIB libspdk_keyring.a 00:04:29.408 LIB libspdk_trace.a 00:04:29.408 SYMLINK libspdk_notify.so 00:04:29.408 SO libspdk_keyring.so.2.0 00:04:29.408 SO libspdk_trace.so.11.0 00:04:29.408 SYMLINK libspdk_keyring.so 00:04:29.408 SYMLINK libspdk_trace.so 00:04:29.977 CC lib/sock/sock.o 00:04:29.977 CC lib/thread/thread.o 00:04:29.977 CC lib/thread/iobuf.o 00:04:29.977 CC lib/sock/sock_rpc.o 00:04:30.544 LIB libspdk_sock.a 00:04:30.544 SO libspdk_sock.so.10.0 00:04:30.544 SYMLINK libspdk_sock.so 00:04:31.112 CC lib/nvme/nvme_ctrlr.o 00:04:31.112 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:31.112 CC lib/nvme/nvme_fabric.o 00:04:31.112 CC lib/nvme/nvme_ns_cmd.o 00:04:31.112 CC lib/nvme/nvme_ns.o 00:04:31.112 CC lib/nvme/nvme_pcie.o 00:04:31.112 CC lib/nvme/nvme_qpair.o 00:04:31.112 CC lib/nvme/nvme_pcie_common.o 00:04:31.112 CC lib/nvme/nvme.o 00:04:31.744 CC lib/nvme/nvme_quirks.o 00:04:31.744 LIB libspdk_thread.a 00:04:31.744 CC lib/nvme/nvme_transport.o 00:04:31.744 CC lib/nvme/nvme_discovery.o 00:04:31.744 SO libspdk_thread.so.11.0 00:04:32.003 SYMLINK libspdk_thread.so 00:04:32.003 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:32.003 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:32.003 CC lib/nvme/nvme_tcp.o 00:04:32.003 CC lib/nvme/nvme_opal.o 00:04:32.003 CC lib/nvme/nvme_io_msg.o 00:04:32.262 CC lib/nvme/nvme_poll_group.o 00:04:32.262 CC lib/nvme/nvme_zns.o 00:04:32.521 CC lib/nvme/nvme_stubs.o 00:04:32.521 CC lib/nvme/nvme_auth.o 00:04:32.780 CC lib/accel/accel.o 00:04:32.780 CC lib/blob/blobstore.o 00:04:32.780 CC lib/init/json_config.o 00:04:32.780 CC 
lib/virtio/virtio.o 00:04:33.039 CC lib/accel/accel_rpc.o 00:04:33.039 CC lib/nvme/nvme_cuse.o 00:04:33.039 CC lib/init/subsystem.o 00:04:33.297 CC lib/accel/accel_sw.o 00:04:33.297 CC lib/fsdev/fsdev.o 00:04:33.297 CC lib/virtio/virtio_vhost_user.o 00:04:33.297 CC lib/init/subsystem_rpc.o 00:04:33.556 CC lib/init/rpc.o 00:04:33.556 CC lib/virtio/virtio_vfio_user.o 00:04:33.556 LIB libspdk_init.a 00:04:33.815 CC lib/blob/request.o 00:04:33.815 CC lib/blob/zeroes.o 00:04:33.815 SO libspdk_init.so.6.0 00:04:33.815 CC lib/nvme/nvme_rdma.o 00:04:33.815 SYMLINK libspdk_init.so 00:04:33.815 CC lib/blob/blob_bs_dev.o 00:04:33.815 CC lib/virtio/virtio_pci.o 00:04:33.815 CC lib/fsdev/fsdev_io.o 00:04:34.073 CC lib/event/app.o 00:04:34.073 CC lib/event/reactor.o 00:04:34.073 CC lib/event/log_rpc.o 00:04:34.073 CC lib/fsdev/fsdev_rpc.o 00:04:34.073 CC lib/event/app_rpc.o 00:04:34.073 LIB libspdk_accel.a 00:04:34.073 SO libspdk_accel.so.16.0 00:04:34.331 CC lib/event/scheduler_static.o 00:04:34.331 LIB libspdk_virtio.a 00:04:34.331 SYMLINK libspdk_accel.so 00:04:34.331 SO libspdk_virtio.so.7.0 00:04:34.331 LIB libspdk_fsdev.a 00:04:34.331 SO libspdk_fsdev.so.2.0 00:04:34.331 SYMLINK libspdk_virtio.so 00:04:34.331 SYMLINK libspdk_fsdev.so 00:04:34.589 CC lib/bdev/bdev.o 00:04:34.589 CC lib/bdev/part.o 00:04:34.589 CC lib/bdev/bdev_rpc.o 00:04:34.589 CC lib/bdev/bdev_zone.o 00:04:34.589 CC lib/bdev/scsi_nvme.o 00:04:34.589 LIB libspdk_event.a 00:04:34.589 SO libspdk_event.so.14.0 00:04:34.589 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:34.848 SYMLINK libspdk_event.so 00:04:35.414 LIB libspdk_nvme.a 00:04:35.414 LIB libspdk_fuse_dispatcher.a 00:04:35.673 SO libspdk_fuse_dispatcher.so.1.0 00:04:35.674 SO libspdk_nvme.so.15.0 00:04:35.674 SYMLINK libspdk_fuse_dispatcher.so 00:04:35.933 SYMLINK libspdk_nvme.so 00:04:36.870 LIB libspdk_blob.a 00:04:36.870 SO libspdk_blob.so.11.0 00:04:37.128 SYMLINK libspdk_blob.so 00:04:37.386 CC lib/blobfs/tree.o 00:04:37.386 CC 
lib/blobfs/blobfs.o 00:04:37.386 CC lib/lvol/lvol.o 00:04:37.953 LIB libspdk_bdev.a 00:04:37.953 SO libspdk_bdev.so.17.0 00:04:38.210 SYMLINK libspdk_bdev.so 00:04:38.467 CC lib/ftl/ftl_core.o 00:04:38.467 CC lib/ftl/ftl_layout.o 00:04:38.467 CC lib/ftl/ftl_debug.o 00:04:38.467 CC lib/nvmf/ctrlr.o 00:04:38.467 CC lib/ftl/ftl_init.o 00:04:38.467 CC lib/nbd/nbd.o 00:04:38.468 CC lib/scsi/dev.o 00:04:38.468 CC lib/ublk/ublk.o 00:04:38.468 LIB libspdk_blobfs.a 00:04:38.468 SO libspdk_blobfs.so.10.0 00:04:38.734 CC lib/scsi/lun.o 00:04:38.734 SYMLINK libspdk_blobfs.so 00:04:38.734 CC lib/scsi/port.o 00:04:38.734 CC lib/ublk/ublk_rpc.o 00:04:38.734 CC lib/nvmf/ctrlr_discovery.o 00:04:38.734 LIB libspdk_lvol.a 00:04:38.734 SO libspdk_lvol.so.10.0 00:04:38.734 CC lib/nvmf/ctrlr_bdev.o 00:04:38.734 SYMLINK libspdk_lvol.so 00:04:38.734 CC lib/nvmf/subsystem.o 00:04:38.734 CC lib/scsi/scsi.o 00:04:38.734 CC lib/scsi/scsi_bdev.o 00:04:39.007 CC lib/ftl/ftl_io.o 00:04:39.007 CC lib/nbd/nbd_rpc.o 00:04:39.007 CC lib/scsi/scsi_pr.o 00:04:39.007 CC lib/scsi/scsi_rpc.o 00:04:39.007 LIB libspdk_nbd.a 00:04:39.266 SO libspdk_nbd.so.7.0 00:04:39.266 CC lib/scsi/task.o 00:04:39.266 SYMLINK libspdk_nbd.so 00:04:39.266 CC lib/nvmf/nvmf.o 00:04:39.266 CC lib/ftl/ftl_sb.o 00:04:39.266 LIB libspdk_ublk.a 00:04:39.266 CC lib/nvmf/nvmf_rpc.o 00:04:39.266 SO libspdk_ublk.so.3.0 00:04:39.266 SYMLINK libspdk_ublk.so 00:04:39.266 CC lib/ftl/ftl_l2p.o 00:04:39.524 CC lib/nvmf/transport.o 00:04:39.524 CC lib/nvmf/tcp.o 00:04:39.524 CC lib/nvmf/stubs.o 00:04:39.524 LIB libspdk_scsi.a 00:04:39.524 CC lib/ftl/ftl_l2p_flat.o 00:04:39.524 SO libspdk_scsi.so.9.0 00:04:39.783 CC lib/nvmf/mdns_server.o 00:04:39.783 SYMLINK libspdk_scsi.so 00:04:39.783 CC lib/nvmf/rdma.o 00:04:39.783 CC lib/ftl/ftl_nv_cache.o 00:04:40.041 CC lib/nvmf/auth.o 00:04:40.300 CC lib/ftl/ftl_band.o 00:04:40.300 CC lib/ftl/ftl_band_ops.o 00:04:40.300 CC lib/iscsi/conn.o 00:04:40.300 CC lib/iscsi/init_grp.o 00:04:40.558 CC 
lib/iscsi/iscsi.o 00:04:40.817 CC lib/iscsi/param.o 00:04:40.817 CC lib/iscsi/portal_grp.o 00:04:40.817 CC lib/iscsi/tgt_node.o 00:04:40.817 CC lib/iscsi/iscsi_subsystem.o 00:04:41.076 CC lib/ftl/ftl_writer.o 00:04:41.076 CC lib/iscsi/iscsi_rpc.o 00:04:41.076 CC lib/ftl/ftl_rq.o 00:04:41.076 CC lib/iscsi/task.o 00:04:41.076 CC lib/ftl/ftl_reloc.o 00:04:41.337 CC lib/ftl/ftl_l2p_cache.o 00:04:41.337 CC lib/ftl/ftl_p2l.o 00:04:41.337 CC lib/ftl/ftl_p2l_log.o 00:04:41.337 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.337 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.337 CC lib/vhost/vhost.o 00:04:41.595 CC lib/vhost/vhost_rpc.o 00:04:41.595 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.595 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.854 CC lib/vhost/vhost_scsi.o 00:04:41.854 CC lib/vhost/vhost_blk.o 00:04:41.854 CC lib/vhost/rte_vhost_user.o 00:04:41.854 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.854 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.854 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.114 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.114 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.114 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.114 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.373 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.373 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.373 CC lib/ftl/utils/ftl_conf.o 00:04:42.373 LIB libspdk_iscsi.a 00:04:42.373 CC lib/ftl/utils/ftl_md.o 00:04:42.373 CC lib/ftl/utils/ftl_mempool.o 00:04:42.697 SO libspdk_iscsi.so.8.0 00:04:42.697 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.697 CC lib/ftl/utils/ftl_property.o 00:04:42.697 SYMLINK libspdk_iscsi.so 00:04:42.697 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.697 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.697 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.697 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.697 LIB libspdk_nvmf.a 00:04:42.697 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.697 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.956 LIB libspdk_vhost.a 00:04:42.956 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
00:04:42.956 SO libspdk_nvmf.so.20.0 00:04:42.956 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.956 SO libspdk_vhost.so.8.0 00:04:42.956 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.956 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.956 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.956 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:42.956 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:42.956 CC lib/ftl/base/ftl_base_dev.o 00:04:42.956 SYMLINK libspdk_vhost.so 00:04:42.956 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.215 CC lib/ftl/ftl_trace.o 00:04:43.215 SYMLINK libspdk_nvmf.so 00:04:43.474 LIB libspdk_ftl.a 00:04:43.734 SO libspdk_ftl.so.9.0 00:04:43.993 SYMLINK libspdk_ftl.so 00:04:44.252 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.511 CC module/accel/dsa/accel_dsa.o 00:04:44.511 CC module/fsdev/aio/fsdev_aio.o 00:04:44.511 CC module/keyring/file/keyring.o 00:04:44.511 CC module/accel/ioat/accel_ioat.o 00:04:44.511 CC module/sock/posix/posix.o 00:04:44.511 CC module/blob/bdev/blob_bdev.o 00:04:44.511 CC module/accel/error/accel_error.o 00:04:44.511 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.511 CC module/keyring/linux/keyring.o 00:04:44.511 LIB libspdk_env_dpdk_rpc.a 00:04:44.511 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.511 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.511 CC module/keyring/linux/keyring_rpc.o 00:04:44.511 CC module/keyring/file/keyring_rpc.o 00:04:44.769 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.769 LIB libspdk_scheduler_dynamic.a 00:04:44.769 CC module/accel/error/accel_error_rpc.o 00:04:44.769 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.769 LIB libspdk_keyring_linux.a 00:04:44.769 SO libspdk_keyring_linux.so.1.0 00:04:44.769 LIB libspdk_keyring_file.a 00:04:44.769 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.769 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.769 SO libspdk_keyring_file.so.2.0 00:04:44.769 LIB libspdk_blob_bdev.a 00:04:44.769 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.769 SO libspdk_blob_bdev.so.11.0 00:04:44.769 SYMLINK 
libspdk_keyring_linux.so 00:04:44.769 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.769 SYMLINK libspdk_keyring_file.so 00:04:44.769 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.769 LIB libspdk_accel_ioat.a 00:04:44.769 LIB libspdk_accel_error.a 00:04:44.769 SYMLINK libspdk_blob_bdev.so 00:04:45.027 SO libspdk_accel_ioat.so.6.0 00:04:45.027 SO libspdk_accel_error.so.2.0 00:04:45.027 LIB libspdk_accel_dsa.a 00:04:45.027 LIB libspdk_scheduler_dpdk_governor.a 00:04:45.027 SO libspdk_accel_dsa.so.5.0 00:04:45.027 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:45.027 SYMLINK libspdk_accel_ioat.so 00:04:45.027 SYMLINK libspdk_accel_error.so 00:04:45.027 CC module/scheduler/gscheduler/gscheduler.o 00:04:45.027 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:45.027 SYMLINK libspdk_accel_dsa.so 00:04:45.027 CC module/bdev/delay/vbdev_delay.o 00:04:45.286 CC module/bdev/error/vbdev_error.o 00:04:45.286 CC module/bdev/gpt/gpt.o 00:04:45.286 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.286 LIB libspdk_scheduler_gscheduler.a 00:04:45.286 CC module/accel/iaa/accel_iaa.o 00:04:45.286 SO libspdk_scheduler_gscheduler.so.4.0 00:04:45.286 CC module/bdev/malloc/bdev_malloc.o 00:04:45.286 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.286 LIB libspdk_fsdev_aio.a 00:04:45.286 SYMLINK libspdk_scheduler_gscheduler.so 00:04:45.286 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.286 SO libspdk_fsdev_aio.so.1.0 00:04:45.286 LIB libspdk_sock_posix.a 00:04:45.286 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.286 SO libspdk_sock_posix.so.6.0 00:04:45.544 SYMLINK libspdk_fsdev_aio.so 00:04:45.544 CC module/accel/iaa/accel_iaa_rpc.o 00:04:45.544 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.544 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.544 SYMLINK libspdk_sock_posix.so 00:04:45.544 LIB libspdk_accel_iaa.a 00:04:45.544 LIB libspdk_blobfs_bdev.a 00:04:45.544 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.544 SO libspdk_accel_iaa.so.3.0 00:04:45.544 SO libspdk_blobfs_bdev.so.6.0 00:04:45.544 LIB 
libspdk_bdev_gpt.a 00:04:45.544 CC module/bdev/null/bdev_null.o 00:04:45.544 CC module/bdev/nvme/bdev_nvme.o 00:04:45.544 LIB libspdk_bdev_error.a 00:04:45.544 SO libspdk_bdev_gpt.so.6.0 00:04:45.803 SYMLINK libspdk_accel_iaa.so 00:04:45.803 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.803 SO libspdk_bdev_error.so.6.0 00:04:45.803 SYMLINK libspdk_blobfs_bdev.so 00:04:45.803 CC module/bdev/null/bdev_null_rpc.o 00:04:45.803 SYMLINK libspdk_bdev_gpt.so 00:04:45.803 SYMLINK libspdk_bdev_error.so 00:04:45.803 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.803 LIB libspdk_bdev_delay.a 00:04:45.803 SO libspdk_bdev_delay.so.6.0 00:04:45.803 LIB libspdk_bdev_malloc.a 00:04:45.803 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.803 SO libspdk_bdev_malloc.so.6.0 00:04:45.803 LIB libspdk_bdev_lvol.a 00:04:45.803 CC module/bdev/raid/bdev_raid.o 00:04:45.803 SYMLINK libspdk_bdev_delay.so 00:04:45.803 CC module/bdev/raid/bdev_raid_rpc.o 00:04:46.060 CC module/bdev/split/vbdev_split.o 00:04:46.060 SO libspdk_bdev_lvol.so.6.0 00:04:46.060 LIB libspdk_bdev_null.a 00:04:46.060 SO libspdk_bdev_null.so.6.0 00:04:46.060 SYMLINK libspdk_bdev_malloc.so 00:04:46.060 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.060 SYMLINK libspdk_bdev_lvol.so 00:04:46.060 CC module/bdev/nvme/nvme_rpc.o 00:04:46.060 SYMLINK libspdk_bdev_null.so 00:04:46.060 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.060 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:46.060 CC module/bdev/split/vbdev_split_rpc.o 00:04:46.318 CC module/bdev/nvme/vbdev_opal.o 00:04:46.318 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.318 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.318 LIB libspdk_bdev_passthru.a 00:04:46.318 SO libspdk_bdev_passthru.so.6.0 00:04:46.318 CC module/bdev/raid/bdev_raid_sb.o 00:04:46.318 LIB libspdk_bdev_split.a 00:04:46.318 SYMLINK libspdk_bdev_passthru.so 00:04:46.318 SO libspdk_bdev_split.so.6.0 00:04:46.576 CC module/bdev/raid/raid0.o 00:04:46.576 LIB 
libspdk_bdev_zone_block.a 00:04:46.576 CC module/bdev/raid/raid1.o 00:04:46.576 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.576 SYMLINK libspdk_bdev_split.so 00:04:46.576 SO libspdk_bdev_zone_block.so.6.0 00:04:46.576 SYMLINK libspdk_bdev_zone_block.so 00:04:46.576 CC module/bdev/aio/bdev_aio.o 00:04:46.576 CC module/bdev/raid/concat.o 00:04:46.576 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.834 CC module/bdev/ftl/bdev_ftl.o 00:04:46.834 CC module/bdev/raid/raid5f.o 00:04:46.834 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.834 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.834 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.834 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:47.092 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:47.092 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:47.092 LIB libspdk_bdev_aio.a 00:04:47.092 SO libspdk_bdev_aio.so.6.0 00:04:47.092 LIB libspdk_bdev_ftl.a 00:04:47.092 SYMLINK libspdk_bdev_aio.so 00:04:47.092 SO libspdk_bdev_ftl.so.6.0 00:04:47.351 SYMLINK libspdk_bdev_ftl.so 00:04:47.351 LIB libspdk_bdev_iscsi.a 00:04:47.351 SO libspdk_bdev_iscsi.so.6.0 00:04:47.351 LIB libspdk_bdev_raid.a 00:04:47.351 SYMLINK libspdk_bdev_iscsi.so 00:04:47.351 LIB libspdk_bdev_virtio.a 00:04:47.351 SO libspdk_bdev_raid.so.6.0 00:04:47.611 SO libspdk_bdev_virtio.so.6.0 00:04:47.612 SYMLINK libspdk_bdev_raid.so 00:04:47.612 SYMLINK libspdk_bdev_virtio.so 00:04:49.513 LIB libspdk_bdev_nvme.a 00:04:49.513 SO libspdk_bdev_nvme.so.7.1 00:04:49.513 SYMLINK libspdk_bdev_nvme.so 00:04:50.081 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.081 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.081 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.081 CC module/event/subsystems/vmd/vmd.o 00:04:50.081 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.081 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.081 CC module/event/subsystems/sock/sock.o 00:04:50.081 CC module/event/subsystems/fsdev/fsdev.o 00:04:50.081 CC 
module/event/subsystems/keyring/keyring.o 00:04:50.081 LIB libspdk_event_fsdev.a 00:04:50.081 LIB libspdk_event_scheduler.a 00:04:50.081 LIB libspdk_event_vmd.a 00:04:50.081 LIB libspdk_event_sock.a 00:04:50.081 LIB libspdk_event_vhost_blk.a 00:04:50.081 LIB libspdk_event_keyring.a 00:04:50.081 SO libspdk_event_fsdev.so.1.0 00:04:50.081 SO libspdk_event_scheduler.so.4.0 00:04:50.340 SO libspdk_event_sock.so.5.0 00:04:50.340 SO libspdk_event_vhost_blk.so.3.0 00:04:50.340 SO libspdk_event_vmd.so.6.0 00:04:50.340 SO libspdk_event_keyring.so.1.0 00:04:50.340 SYMLINK libspdk_event_fsdev.so 00:04:50.340 SYMLINK libspdk_event_scheduler.so 00:04:50.340 LIB libspdk_event_iobuf.a 00:04:50.340 SYMLINK libspdk_event_sock.so 00:04:50.340 SYMLINK libspdk_event_vhost_blk.so 00:04:50.340 SYMLINK libspdk_event_keyring.so 00:04:50.340 SYMLINK libspdk_event_vmd.so 00:04:50.340 SO libspdk_event_iobuf.so.3.0 00:04:50.340 SYMLINK libspdk_event_iobuf.so 00:04:50.908 CC module/event/subsystems/accel/accel.o 00:04:50.908 LIB libspdk_event_accel.a 00:04:50.908 SO libspdk_event_accel.so.6.0 00:04:51.168 SYMLINK libspdk_event_accel.so 00:04:51.427 CC module/event/subsystems/bdev/bdev.o 00:04:51.686 LIB libspdk_event_bdev.a 00:04:51.686 SO libspdk_event_bdev.so.6.0 00:04:51.686 SYMLINK libspdk_event_bdev.so 00:04:52.255 CC module/event/subsystems/scsi/scsi.o 00:04:52.255 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.255 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.255 CC module/event/subsystems/ublk/ublk.o 00:04:52.255 CC module/event/subsystems/nbd/nbd.o 00:04:52.255 LIB libspdk_event_scsi.a 00:04:52.255 LIB libspdk_event_ublk.a 00:04:52.255 LIB libspdk_event_nbd.a 00:04:52.255 SO libspdk_event_scsi.so.6.0 00:04:52.255 SO libspdk_event_ublk.so.3.0 00:04:52.255 SO libspdk_event_nbd.so.6.0 00:04:52.515 SYMLINK libspdk_event_ublk.so 00:04:52.515 SYMLINK libspdk_event_scsi.so 00:04:52.515 SYMLINK libspdk_event_nbd.so 00:04:52.515 LIB libspdk_event_nvmf.a 00:04:52.515 SO 
libspdk_event_nvmf.so.6.0 00:04:52.515 SYMLINK libspdk_event_nvmf.so 00:04:52.774 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.774 CC module/event/subsystems/iscsi/iscsi.o 00:04:53.033 LIB libspdk_event_vhost_scsi.a 00:04:53.033 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.033 LIB libspdk_event_iscsi.a 00:04:53.033 SO libspdk_event_iscsi.so.6.0 00:04:53.033 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.033 SYMLINK libspdk_event_iscsi.so 00:04:53.293 SO libspdk.so.6.0 00:04:53.293 SYMLINK libspdk.so 00:04:53.553 TEST_HEADER include/spdk/accel.h 00:04:53.553 CC app/trace_record/trace_record.o 00:04:53.553 TEST_HEADER include/spdk/accel_module.h 00:04:53.553 CXX app/trace/trace.o 00:04:53.553 TEST_HEADER include/spdk/assert.h 00:04:53.553 TEST_HEADER include/spdk/barrier.h 00:04:53.553 TEST_HEADER include/spdk/base64.h 00:04:53.812 TEST_HEADER include/spdk/bdev.h 00:04:53.812 TEST_HEADER include/spdk/bdev_module.h 00:04:53.812 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.812 TEST_HEADER include/spdk/bit_array.h 00:04:53.812 TEST_HEADER include/spdk/bit_pool.h 00:04:53.812 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.812 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.812 TEST_HEADER include/spdk/blobfs.h 00:04:53.812 TEST_HEADER include/spdk/blob.h 00:04:53.812 TEST_HEADER include/spdk/conf.h 00:04:53.812 TEST_HEADER include/spdk/config.h 00:04:53.812 TEST_HEADER include/spdk/cpuset.h 00:04:53.812 TEST_HEADER include/spdk/crc16.h 00:04:53.812 TEST_HEADER include/spdk/crc32.h 00:04:53.812 TEST_HEADER include/spdk/crc64.h 00:04:53.812 TEST_HEADER include/spdk/dif.h 00:04:53.812 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.812 TEST_HEADER include/spdk/dma.h 00:04:53.812 TEST_HEADER include/spdk/endian.h 00:04:53.812 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.812 TEST_HEADER include/spdk/env.h 00:04:53.812 TEST_HEADER include/spdk/event.h 00:04:53.812 TEST_HEADER include/spdk/fd_group.h 00:04:53.812 TEST_HEADER include/spdk/fd.h 00:04:53.812 
TEST_HEADER include/spdk/file.h 00:04:53.812 CC examples/ioat/perf/perf.o 00:04:53.812 TEST_HEADER include/spdk/fsdev.h 00:04:53.812 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.812 TEST_HEADER include/spdk/ftl.h 00:04:53.812 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.812 CC app/nvmf_tgt/nvmf_main.o 00:04:53.812 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.812 TEST_HEADER include/spdk/hexlify.h 00:04:53.812 TEST_HEADER include/spdk/histogram_data.h 00:04:53.812 TEST_HEADER include/spdk/idxd.h 00:04:53.812 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.812 TEST_HEADER include/spdk/init.h 00:04:53.812 TEST_HEADER include/spdk/ioat.h 00:04:53.812 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.812 CC examples/util/zipf/zipf.o 00:04:53.812 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.812 TEST_HEADER include/spdk/json.h 00:04:53.812 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.812 TEST_HEADER include/spdk/keyring.h 00:04:53.812 TEST_HEADER include/spdk/keyring_module.h 00:04:53.812 TEST_HEADER include/spdk/likely.h 00:04:53.812 TEST_HEADER include/spdk/log.h 00:04:53.812 TEST_HEADER include/spdk/lvol.h 00:04:53.812 TEST_HEADER include/spdk/md5.h 00:04:53.812 TEST_HEADER include/spdk/memory.h 00:04:53.812 TEST_HEADER include/spdk/mmio.h 00:04:53.812 TEST_HEADER include/spdk/nbd.h 00:04:53.812 TEST_HEADER include/spdk/net.h 00:04:53.812 TEST_HEADER include/spdk/notify.h 00:04:53.812 TEST_HEADER include/spdk/nvme.h 00:04:53.812 CC test/thread/poller_perf/poller_perf.o 00:04:53.812 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.812 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.812 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.812 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.812 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.812 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.813 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.813 TEST_HEADER include/spdk/nvmf.h 00:04:53.813 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.813 CC test/dma/test_dma/test_dma.o 
00:04:53.813 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.813 TEST_HEADER include/spdk/opal.h 00:04:53.813 TEST_HEADER include/spdk/opal_spec.h 00:04:53.813 TEST_HEADER include/spdk/pci_ids.h 00:04:53.813 TEST_HEADER include/spdk/pipe.h 00:04:53.813 TEST_HEADER include/spdk/queue.h 00:04:53.813 TEST_HEADER include/spdk/reduce.h 00:04:53.813 CC test/app/bdev_svc/bdev_svc.o 00:04:53.813 TEST_HEADER include/spdk/rpc.h 00:04:53.813 TEST_HEADER include/spdk/scheduler.h 00:04:53.813 TEST_HEADER include/spdk/scsi.h 00:04:53.813 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.813 TEST_HEADER include/spdk/sock.h 00:04:53.813 TEST_HEADER include/spdk/stdinc.h 00:04:53.813 TEST_HEADER include/spdk/string.h 00:04:53.813 TEST_HEADER include/spdk/thread.h 00:04:53.813 TEST_HEADER include/spdk/trace.h 00:04:53.813 TEST_HEADER include/spdk/trace_parser.h 00:04:53.813 TEST_HEADER include/spdk/tree.h 00:04:53.813 TEST_HEADER include/spdk/ublk.h 00:04:53.813 TEST_HEADER include/spdk/util.h 00:04:53.813 TEST_HEADER include/spdk/uuid.h 00:04:53.813 TEST_HEADER include/spdk/version.h 00:04:53.813 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.813 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.813 TEST_HEADER include/spdk/vhost.h 00:04:53.813 TEST_HEADER include/spdk/vmd.h 00:04:53.813 TEST_HEADER include/spdk/xor.h 00:04:53.813 TEST_HEADER include/spdk/zipf.h 00:04:53.813 CXX test/cpp_headers/accel.o 00:04:53.813 LINK interrupt_tgt 00:04:54.072 LINK zipf 00:04:54.072 LINK poller_perf 00:04:54.072 LINK nvmf_tgt 00:04:54.072 LINK spdk_trace_record 00:04:54.072 LINK ioat_perf 00:04:54.072 LINK bdev_svc 00:04:54.072 CXX test/cpp_headers/accel_module.o 00:04:54.072 LINK spdk_trace 00:04:54.332 CC examples/ioat/verify/verify.o 00:04:54.332 CXX test/cpp_headers/assert.o 00:04:54.332 CC examples/thread/thread/thread_ex.o 00:04:54.332 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.332 CC test/env/vtophys/vtophys.o 00:04:54.332 CC examples/vmd/lsvmd/lsvmd.o 00:04:54.332 CC 
examples/sock/hello_world/hello_sock.o 00:04:54.332 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.332 LINK test_dma 00:04:54.592 LINK verify 00:04:54.592 CXX test/cpp_headers/barrier.o 00:04:54.592 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.592 LINK vtophys 00:04:54.592 LINK lsvmd 00:04:54.592 LINK thread 00:04:54.592 CXX test/cpp_headers/base64.o 00:04:54.592 LINK hello_sock 00:04:54.851 LINK iscsi_tgt 00:04:54.851 CC examples/vmd/led/led.o 00:04:54.851 CC test/app/histogram_perf/histogram_perf.o 00:04:54.851 CC test/app/jsoncat/jsoncat.o 00:04:54.851 CXX test/cpp_headers/bdev.o 00:04:54.851 CC test/app/stub/stub.o 00:04:54.851 LINK nvme_fuzz 00:04:54.851 LINK led 00:04:54.851 LINK histogram_perf 00:04:54.851 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.851 CC test/rpc_client/rpc_client_test.o 00:04:54.851 LINK jsoncat 00:04:55.111 CXX test/cpp_headers/bdev_module.o 00:04:55.111 LINK stub 00:04:55.111 LINK mem_callbacks 00:04:55.111 LINK env_dpdk_post_init 00:04:55.111 CC app/spdk_tgt/spdk_tgt.o 00:04:55.111 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:55.111 LINK rpc_client_test 00:04:55.111 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:55.111 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:55.111 CC test/env/memory/memory_ut.o 00:04:55.111 CXX test/cpp_headers/bdev_zone.o 00:04:55.370 CC examples/idxd/perf/perf.o 00:04:55.370 CC test/env/pci/pci_ut.o 00:04:55.370 CXX test/cpp_headers/bit_array.o 00:04:55.370 LINK spdk_tgt 00:04:55.370 CC app/spdk_lspci/spdk_lspci.o 00:04:55.370 CC app/spdk_nvme_perf/perf.o 00:04:55.629 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:55.629 CXX test/cpp_headers/bit_pool.o 00:04:55.629 LINK spdk_lspci 00:04:55.629 CXX test/cpp_headers/blob_bdev.o 00:04:55.629 LINK idxd_perf 00:04:55.629 LINK vhost_fuzz 00:04:55.889 LINK pci_ut 00:04:55.889 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.889 LINK hello_fsdev 00:04:55.889 CC app/spdk_nvme_identify/identify.o 00:04:55.889 CC examples/accel/perf/accel_perf.o 
00:04:55.889 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.149 CXX test/cpp_headers/blobfs.o 00:04:56.149 CXX test/cpp_headers/blob.o 00:04:56.149 LINK spdk_nvme_discover 00:04:56.149 CC test/accel/dif/dif.o 00:04:56.408 CXX test/cpp_headers/conf.o 00:04:56.408 CC examples/blob/hello_world/hello_blob.o 00:04:56.408 CXX test/cpp_headers/config.o 00:04:56.408 CC examples/blob/cli/blobcli.o 00:04:56.408 CXX test/cpp_headers/cpuset.o 00:04:56.408 CC app/spdk_top/spdk_top.o 00:04:56.408 LINK spdk_nvme_perf 00:04:56.408 LINK accel_perf 00:04:56.668 CXX test/cpp_headers/crc16.o 00:04:56.668 LINK hello_blob 00:04:56.668 LINK memory_ut 00:04:56.668 CXX test/cpp_headers/crc32.o 00:04:56.927 CXX test/cpp_headers/crc64.o 00:04:56.927 LINK spdk_nvme_identify 00:04:56.927 CC test/blobfs/mkfs/mkfs.o 00:04:56.927 LINK blobcli 00:04:56.927 CC test/event/event_perf/event_perf.o 00:04:56.927 CC app/vhost/vhost.o 00:04:57.187 CXX test/cpp_headers/dif.o 00:04:57.187 CC test/lvol/esnap/esnap.o 00:04:57.187 LINK dif 00:04:57.187 LINK event_perf 00:04:57.187 LINK mkfs 00:04:57.187 LINK vhost 00:04:57.187 CXX test/cpp_headers/dma.o 00:04:57.187 LINK iscsi_fuzz 00:04:57.187 CC examples/nvme/hello_world/hello_world.o 00:04:57.446 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.446 CC test/event/reactor/reactor.o 00:04:57.446 CXX test/cpp_headers/endian.o 00:04:57.446 CC test/nvme/aer/aer.o 00:04:57.446 LINK reactor 00:04:57.446 CC examples/nvme/reconnect/reconnect.o 00:04:57.446 LINK hello_world 00:04:57.446 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.446 CC test/bdev/bdevio/bdevio.o 00:04:57.705 LINK spdk_top 00:04:57.705 CXX test/cpp_headers/env_dpdk.o 00:04:57.705 LINK hello_bdev 00:04:57.705 CC test/event/reactor_perf/reactor_perf.o 00:04:57.705 CXX test/cpp_headers/env.o 00:04:57.705 CC examples/nvme/arbitration/arbitration.o 00:04:57.965 LINK aer 00:04:57.965 CC app/spdk_dd/spdk_dd.o 00:04:57.965 LINK reactor_perf 00:04:57.965 LINK reconnect 00:04:57.965 CXX 
test/cpp_headers/event.o 00:04:57.965 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.965 LINK bdevio 00:04:58.225 CXX test/cpp_headers/fd_group.o 00:04:58.225 CC test/nvme/reset/reset.o 00:04:58.225 CC test/event/app_repeat/app_repeat.o 00:04:58.225 LINK nvme_manage 00:04:58.225 LINK arbitration 00:04:58.225 CC examples/nvme/hotplug/hotplug.o 00:04:58.225 CXX test/cpp_headers/fd.o 00:04:58.225 LINK spdk_dd 00:04:58.225 LINK app_repeat 00:04:58.225 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:58.484 CXX test/cpp_headers/file.o 00:04:58.484 LINK reset 00:04:58.484 CC examples/nvme/abort/abort.o 00:04:58.484 LINK hotplug 00:04:58.484 CC test/event/scheduler/scheduler.o 00:04:58.484 LINK cmb_copy 00:04:58.484 CXX test/cpp_headers/fsdev.o 00:04:58.484 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.743 CXX test/cpp_headers/fsdev_module.o 00:04:58.743 CC app/fio/nvme/fio_plugin.o 00:04:58.743 CC test/nvme/sgl/sgl.o 00:04:58.743 CXX test/cpp_headers/ftl.o 00:04:58.743 LINK scheduler 00:04:58.743 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.743 LINK pmr_persistence 00:04:58.743 CXX test/cpp_headers/gpt_spec.o 00:04:58.743 LINK abort 00:04:58.743 LINK bdevperf 00:04:59.085 CXX test/cpp_headers/hexlify.o 00:04:59.085 CXX test/cpp_headers/histogram_data.o 00:04:59.085 LINK sgl 00:04:59.085 CXX test/cpp_headers/idxd.o 00:04:59.085 CC test/nvme/e2edp/nvme_dp.o 00:04:59.085 CC test/nvme/overhead/overhead.o 00:04:59.085 CXX test/cpp_headers/idxd_spec.o 00:04:59.085 CC app/fio/bdev/fio_plugin.o 00:04:59.085 CXX test/cpp_headers/init.o 00:04:59.085 CC test/nvme/err_injection/err_injection.o 00:04:59.345 CC test/nvme/startup/startup.o 00:04:59.345 CXX test/cpp_headers/ioat.o 00:04:59.345 LINK spdk_nvme 00:04:59.345 CC examples/nvmf/nvmf/nvmf.o 00:04:59.345 LINK nvme_dp 00:04:59.345 LINK overhead 00:04:59.345 CXX test/cpp_headers/ioat_spec.o 00:04:59.345 LINK startup 00:04:59.345 LINK err_injection 00:04:59.345 CC test/nvme/reserve/reserve.o 00:04:59.345 CXX 
test/cpp_headers/iscsi_spec.o 00:04:59.604 CXX test/cpp_headers/json.o 00:04:59.604 CXX test/cpp_headers/jsonrpc.o 00:04:59.604 CC test/nvme/simple_copy/simple_copy.o 00:04:59.604 LINK reserve 00:04:59.604 LINK nvmf 00:04:59.604 CC test/nvme/connect_stress/connect_stress.o 00:04:59.604 CC test/nvme/compliance/nvme_compliance.o 00:04:59.604 CC test/nvme/boot_partition/boot_partition.o 00:04:59.605 CXX test/cpp_headers/keyring.o 00:04:59.864 LINK spdk_bdev 00:04:59.864 CC test/nvme/fused_ordering/fused_ordering.o 00:04:59.864 CXX test/cpp_headers/keyring_module.o 00:04:59.864 LINK boot_partition 00:04:59.864 LINK connect_stress 00:04:59.864 CXX test/cpp_headers/likely.o 00:04:59.864 LINK simple_copy 00:04:59.864 CXX test/cpp_headers/log.o 00:04:59.864 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:00.122 LINK fused_ordering 00:05:00.122 CXX test/cpp_headers/lvol.o 00:05:00.122 CXX test/cpp_headers/md5.o 00:05:00.122 LINK nvme_compliance 00:05:00.122 CXX test/cpp_headers/memory.o 00:05:00.122 CC test/nvme/fdp/fdp.o 00:05:00.122 CXX test/cpp_headers/mmio.o 00:05:00.122 CC test/nvme/cuse/cuse.o 00:05:00.122 CXX test/cpp_headers/nbd.o 00:05:00.381 LINK doorbell_aers 00:05:00.381 CXX test/cpp_headers/net.o 00:05:00.381 CXX test/cpp_headers/notify.o 00:05:00.381 CXX test/cpp_headers/nvme.o 00:05:00.381 CXX test/cpp_headers/nvme_intel.o 00:05:00.381 CXX test/cpp_headers/nvme_ocssd.o 00:05:00.381 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:00.381 CXX test/cpp_headers/nvme_spec.o 00:05:00.381 CXX test/cpp_headers/nvme_zns.o 00:05:00.381 CXX test/cpp_headers/nvmf_cmd.o 00:05:00.381 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:00.381 CXX test/cpp_headers/nvmf.o 00:05:00.381 CXX test/cpp_headers/nvmf_spec.o 00:05:00.381 CXX test/cpp_headers/nvmf_transport.o 00:05:00.640 LINK fdp 00:05:00.640 CXX test/cpp_headers/opal.o 00:05:00.640 CXX test/cpp_headers/opal_spec.o 00:05:00.640 CXX test/cpp_headers/pci_ids.o 00:05:00.640 CXX test/cpp_headers/pipe.o 00:05:00.640 CXX 
test/cpp_headers/queue.o 00:05:00.640 CXX test/cpp_headers/reduce.o 00:05:00.640 CXX test/cpp_headers/rpc.o 00:05:00.640 CXX test/cpp_headers/scheduler.o 00:05:00.640 CXX test/cpp_headers/scsi.o 00:05:00.899 CXX test/cpp_headers/scsi_spec.o 00:05:00.899 CXX test/cpp_headers/sock.o 00:05:00.899 CXX test/cpp_headers/stdinc.o 00:05:00.899 CXX test/cpp_headers/string.o 00:05:00.899 CXX test/cpp_headers/thread.o 00:05:00.899 CXX test/cpp_headers/trace.o 00:05:00.899 CXX test/cpp_headers/trace_parser.o 00:05:00.899 CXX test/cpp_headers/tree.o 00:05:00.899 CXX test/cpp_headers/ublk.o 00:05:00.899 CXX test/cpp_headers/util.o 00:05:00.899 CXX test/cpp_headers/uuid.o 00:05:00.899 CXX test/cpp_headers/version.o 00:05:00.899 CXX test/cpp_headers/vfio_user_pci.o 00:05:01.156 CXX test/cpp_headers/vfio_user_spec.o 00:05:01.156 CXX test/cpp_headers/vhost.o 00:05:01.156 CXX test/cpp_headers/vmd.o 00:05:01.156 CXX test/cpp_headers/xor.o 00:05:01.156 CXX test/cpp_headers/zipf.o 00:05:01.725 LINK cuse 00:05:03.627 LINK esnap 00:05:04.195 00:05:04.195 real 1m35.844s 00:05:04.195 user 8m25.522s 00:05:04.195 sys 1m49.249s 00:05:04.195 16:19:16 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:04.195 16:19:16 make -- common/autotest_common.sh@10 -- $ set +x 00:05:04.195 ************************************ 00:05:04.195 END TEST make 00:05:04.195 ************************************ 00:05:04.195 16:19:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:04.195 16:19:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:04.195 16:19:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:04.195 16:19:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.195 16:19:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:04.195 16:19:17 -- pm/common@44 -- $ pid=5469 00:05:04.195 16:19:17 -- pm/common@50 -- $ kill -TERM 5469 00:05:04.195 16:19:17 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:04.195 16:19:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:04.195 16:19:17 -- pm/common@44 -- $ pid=5471 00:05:04.195 16:19:17 -- pm/common@50 -- $ kill -TERM 5471 00:05:04.195 16:19:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:04.195 16:19:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:04.195 16:19:17 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.195 16:19:17 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.195 16:19:17 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.195 16:19:17 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.195 16:19:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.195 16:19:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.195 16:19:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.195 16:19:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.195 16:19:17 -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.195 16:19:17 -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.195 16:19:17 -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.195 16:19:17 -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.195 16:19:17 -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.195 16:19:17 -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.195 16:19:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.195 16:19:17 -- scripts/common.sh@344 -- # case "$op" in 00:05:04.195 16:19:17 -- scripts/common.sh@345 -- # : 1 00:05:04.195 16:19:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.195 16:19:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.195 16:19:17 -- scripts/common.sh@365 -- # decimal 1 00:05:04.195 16:19:17 -- scripts/common.sh@353 -- # local d=1 00:05:04.196 16:19:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.196 16:19:17 -- scripts/common.sh@355 -- # echo 1 00:05:04.196 16:19:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.196 16:19:17 -- scripts/common.sh@366 -- # decimal 2 00:05:04.196 16:19:17 -- scripts/common.sh@353 -- # local d=2 00:05:04.196 16:19:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.196 16:19:17 -- scripts/common.sh@355 -- # echo 2 00:05:04.196 16:19:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.196 16:19:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.196 16:19:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.196 16:19:17 -- scripts/common.sh@368 -- # return 0 00:05:04.196 16:19:17 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.196 16:19:17 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.196 --rc genhtml_branch_coverage=1 00:05:04.196 --rc genhtml_function_coverage=1 00:05:04.196 --rc genhtml_legend=1 00:05:04.196 --rc geninfo_all_blocks=1 00:05:04.196 --rc geninfo_unexecuted_blocks=1 00:05:04.196 00:05:04.196 ' 00:05:04.196 16:19:17 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.196 --rc genhtml_branch_coverage=1 00:05:04.196 --rc genhtml_function_coverage=1 00:05:04.196 --rc genhtml_legend=1 00:05:04.196 --rc geninfo_all_blocks=1 00:05:04.196 --rc geninfo_unexecuted_blocks=1 00:05:04.196 00:05:04.196 ' 00:05:04.196 16:19:17 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.196 --rc genhtml_branch_coverage=1 00:05:04.196 --rc 
genhtml_function_coverage=1 00:05:04.196 --rc genhtml_legend=1 00:05:04.196 --rc geninfo_all_blocks=1 00:05:04.196 --rc geninfo_unexecuted_blocks=1 00:05:04.196 00:05:04.196 ' 00:05:04.196 16:19:17 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.196 --rc genhtml_branch_coverage=1 00:05:04.196 --rc genhtml_function_coverage=1 00:05:04.196 --rc genhtml_legend=1 00:05:04.196 --rc geninfo_all_blocks=1 00:05:04.196 --rc geninfo_unexecuted_blocks=1 00:05:04.196 00:05:04.196 ' 00:05:04.196 16:19:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.455 16:19:17 -- nvmf/common.sh@7 -- # uname -s 00:05:04.455 16:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.455 16:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.455 16:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.455 16:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.455 16:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.455 16:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.455 16:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.455 16:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.455 16:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.455 16:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.455 16:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28c63530-75cb-4ffa-be40-6c238887710c 00:05:04.455 16:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=28c63530-75cb-4ffa-be40-6c238887710c 00:05:04.455 16:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.455 16:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.455 16:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.455 16:19:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:04.455 16:19:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.455 16:19:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.455 16:19:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.455 16:19:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.455 16:19:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.456 16:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.456 16:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.456 16:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.456 16:19:17 -- paths/export.sh@5 -- # export PATH 00:05:04.456 16:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.456 16:19:17 -- nvmf/common.sh@51 -- # : 0 00:05:04.456 16:19:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.456 16:19:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.456 16:19:17 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:04.456 16:19:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.456 16:19:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.456 16:19:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.456 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.456 16:19:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.456 16:19:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.456 16:19:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.456 16:19:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:04.456 16:19:17 -- spdk/autotest.sh@32 -- # uname -s 00:05:04.456 16:19:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:04.456 16:19:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:04.456 16:19:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.456 16:19:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:04.456 16:19:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:04.456 16:19:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:04.456 16:19:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:04.456 16:19:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:04.456 16:19:17 -- spdk/autotest.sh@48 -- # udevadm_pid=54565 00:05:04.456 16:19:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:04.456 16:19:17 -- pm/common@17 -- # local monitor 00:05:04.456 16:19:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:04.456 16:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.456 16:19:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.456 16:19:17 -- pm/common@21 -- # date +%s 00:05:04.456 16:19:17 -- pm/common@21 -- # date +%s 00:05:04.456 16:19:17 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730823557 00:05:04.456 16:19:17 -- pm/common@25 -- # sleep 1 00:05:04.456 16:19:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730823557 00:05:04.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730823557_collect-cpu-load.pm.log 00:05:04.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730823557_collect-vmstat.pm.log 00:05:05.394 16:19:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:05.395 16:19:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:05.395 16:19:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.395 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.395 16:19:18 -- spdk/autotest.sh@59 -- # create_test_list 00:05:05.395 16:19:18 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:05.395 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.395 16:19:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:05.655 16:19:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:05.655 16:19:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:05.655 16:19:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:05.655 16:19:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:05.655 16:19:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:05.655 16:19:18 -- common/autotest_common.sh@1455 -- # uname 00:05:05.655 16:19:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:05.655 16:19:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:05.655 16:19:18 -- common/autotest_common.sh@1475 -- # 
uname 00:05:05.655 16:19:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:05.655 16:19:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:05.655 16:19:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.655 lcov: LCOV version 1.15 00:05:05.655 16:19:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:23.799 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:23.799 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:41.903 16:19:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:41.903 16:19:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.903 16:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:41.903 16:19:51 -- spdk/autotest.sh@78 -- # rm -f 00:05:41.903 16:19:51 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.903 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:41.903 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:41.903 16:19:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:41.903 16:19:52 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:41.903 16:19:52 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:41.903 16:19:52 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:41.903 16:19:52 
-- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.903 16:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:41.903 16:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:41.903 16:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.903 16:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:41.903 16:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:41.903 16:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.903 16:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:41.903 16:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:41.903 16:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:41.903 16:19:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:41.903 16:19:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:41.903 16:19:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:41.903 16:19:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:41.903 16:19:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:41.903 16:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.903 16:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.903 16:19:52 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:05:41.903 16:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:41.903 16:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:41.903 No valid GPT data, bailing 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # pt= 00:05:41.903 16:19:52 -- scripts/common.sh@395 -- # return 1 00:05:41.903 16:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:41.903 1+0 records in 00:05:41.903 1+0 records out 00:05:41.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554162 s, 189 MB/s 00:05:41.903 16:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.903 16:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.903 16:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:41.903 16:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:41.903 16:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:41.903 No valid GPT data, bailing 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # pt= 00:05:41.903 16:19:52 -- scripts/common.sh@395 -- # return 1 00:05:41.903 16:19:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:41.903 1+0 records in 00:05:41.903 1+0 records out 00:05:41.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476239 s, 220 MB/s 00:05:41.903 16:19:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.903 16:19:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.903 16:19:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:41.903 16:19:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:41.903 16:19:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:41.903 
No valid GPT data, bailing 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:41.903 16:19:52 -- scripts/common.sh@394 -- # pt= 00:05:41.903 16:19:52 -- scripts/common.sh@395 -- # return 1 00:05:41.903 16:19:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:41.903 1+0 records in 00:05:41.903 1+0 records out 00:05:41.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805033 s, 130 MB/s 00:05:41.903 16:19:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:41.903 16:19:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:41.903 16:19:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:41.903 16:19:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:41.903 16:19:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:41.903 No valid GPT data, bailing 00:05:41.903 16:19:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:41.903 16:19:53 -- scripts/common.sh@394 -- # pt= 00:05:41.903 16:19:53 -- scripts/common.sh@395 -- # return 1 00:05:41.903 16:19:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:41.903 1+0 records in 00:05:41.903 1+0 records out 00:05:41.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663812 s, 158 MB/s 00:05:41.903 16:19:53 -- spdk/autotest.sh@105 -- # sync 00:05:41.904 16:19:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:41.904 16:19:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:41.904 16:19:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:42.842 16:19:55 -- spdk/autotest.sh@111 -- # uname -s 00:05:42.842 16:19:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:42.842 16:19:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:42.842 16:19:55 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:43.410 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.410 Hugepages 00:05:43.410 node hugesize free / total 00:05:43.410 node0 1048576kB 0 / 0 00:05:43.410 node0 2048kB 0 / 0 00:05:43.410 00:05:43.410 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.410 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.670 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.670 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:43.670 16:19:56 -- spdk/autotest.sh@117 -- # uname -s 00:05:43.670 16:19:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:43.670 16:19:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:43.670 16:19:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.606 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.606 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.606 16:19:57 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:45.983 16:19:58 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:45.983 16:19:58 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:45.983 16:19:58 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.983 16:19:58 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:45.983 16:19:58 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:45.983 16:19:58 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:45.983 16:19:58 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.983 16:19:58 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:45.983 16:19:58 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:45.983 16:19:58 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:45.983 16:19:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:45.984 16:19:58 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.264 Waiting for block devices as requested 00:05:46.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.525 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.525 16:19:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.525 16:19:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.525 16:19:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.525 16:19:59 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.525 16:19:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1541 -- # continue 00:05:46.525 16:19:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.525 16:19:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.525 16:19:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.525 16:19:59 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.525 16:19:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.525 16:19:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2 
00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:46.525 16:19:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.525 16:19:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.525 16:19:59 -- common/autotest_common.sh@1541 -- # continue 00:05:46.525 16:19:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.525 16:19:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.525 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.525 16:19:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.525 16:19:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.525 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.525 16:19:59 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.462 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.720 16:20:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.720 16:20:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.720 16:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.720 16:20:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:47.720 16:20:00 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:47.720 16:20:00 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.720 16:20:00 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:47.720 16:20:00 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:47.720 16:20:00 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:47.720 16:20:00 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:47.720 16:20:00 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:47.720 
16:20:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.720 16:20:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.720 16:20:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.720 16:20:00 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.720 16:20:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.720 16:20:00 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:47.720 16:20:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:47.720 16:20:00 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.720 16:20:00 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:47.720 16:20:00 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.720 16:20:00 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.720 16:20:00 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.720 16:20:00 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:47.720 16:20:00 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.720 16:20:00 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.720 16:20:00 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:47.720 16:20:00 -- common/autotest_common.sh@1570 -- # return 0 00:05:47.720 16:20:00 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:47.720 16:20:00 -- common/autotest_common.sh@1578 -- # return 0 00:05:47.720 16:20:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:47.720 16:20:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:47.720 16:20:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.720 16:20:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.720 16:20:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:47.720 16:20:00 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.720 16:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.720 16:20:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:47.720 16:20:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.720 16:20:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.720 16:20:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.720 16:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.720 ************************************ 00:05:47.720 START TEST env 00:05:47.720 ************************************ 00:05:47.720 16:20:00 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.978 * Looking for test storage... 00:05:47.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.978 16:20:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.978 16:20:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.978 16:20:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.978 16:20:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.978 16:20:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.978 16:20:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.978 16:20:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.978 16:20:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.978 16:20:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.978 16:20:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.978 16:20:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.978 16:20:00 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:47.978 16:20:00 env -- scripts/common.sh@345 -- # : 1 00:05:47.978 16:20:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.978 16:20:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.978 16:20:00 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.978 16:20:00 env -- scripts/common.sh@353 -- # local d=1 00:05:47.978 16:20:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.978 16:20:00 env -- scripts/common.sh@355 -- # echo 1 00:05:47.978 16:20:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.978 16:20:00 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.978 16:20:00 env -- scripts/common.sh@353 -- # local d=2 00:05:47.978 16:20:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.978 16:20:00 env -- scripts/common.sh@355 -- # echo 2 00:05:47.978 16:20:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.978 16:20:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.978 16:20:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.978 16:20:00 env -- scripts/common.sh@368 -- # return 0 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.978 --rc genhtml_branch_coverage=1 00:05:47.978 --rc genhtml_function_coverage=1 00:05:47.978 --rc genhtml_legend=1 00:05:47.978 --rc geninfo_all_blocks=1 00:05:47.978 --rc geninfo_unexecuted_blocks=1 00:05:47.978 00:05:47.978 ' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.978 --rc genhtml_branch_coverage=1 00:05:47.978 --rc genhtml_function_coverage=1 00:05:47.978 --rc genhtml_legend=1 00:05:47.978 --rc 
geninfo_all_blocks=1 00:05:47.978 --rc geninfo_unexecuted_blocks=1 00:05:47.978 00:05:47.978 ' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.978 --rc genhtml_branch_coverage=1 00:05:47.978 --rc genhtml_function_coverage=1 00:05:47.978 --rc genhtml_legend=1 00:05:47.978 --rc geninfo_all_blocks=1 00:05:47.978 --rc geninfo_unexecuted_blocks=1 00:05:47.978 00:05:47.978 ' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.978 --rc genhtml_branch_coverage=1 00:05:47.978 --rc genhtml_function_coverage=1 00:05:47.978 --rc genhtml_legend=1 00:05:47.978 --rc geninfo_all_blocks=1 00:05:47.978 --rc geninfo_unexecuted_blocks=1 00:05:47.978 00:05:47.978 ' 00:05:47.978 16:20:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.978 16:20:00 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.978 16:20:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.978 ************************************ 00:05:47.978 START TEST env_memory 00:05:47.978 ************************************ 00:05:47.978 16:20:01 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.978 00:05:47.978 00:05:47.978 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.978 http://cunit.sourceforge.net/ 00:05:47.978 00:05:47.978 00:05:47.978 Suite: memory 00:05:48.236 Test: alloc and free memory map ...[2024-11-05 16:20:01.082388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:48.236 passed 00:05:48.236 Test: mem map translation ...[2024-11-05 16:20:01.130243] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:48.236 [2024-11-05 16:20:01.130360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:48.236 [2024-11-05 16:20:01.130472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:48.236 [2024-11-05 16:20:01.130602] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:48.236 passed 00:05:48.236 Test: mem map registration ...[2024-11-05 16:20:01.216027] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:48.236 [2024-11-05 16:20:01.216237] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:48.236 passed 00:05:48.236 Test: mem map adjacent registrations ...passed 00:05:48.236 00:05:48.236 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.236 suites 1 1 n/a 0 0 00:05:48.236 tests 4 4 4 0 0 00:05:48.236 asserts 152 152 152 0 n/a 00:05:48.236 00:05:48.236 Elapsed time = 0.281 seconds 00:05:48.495 00:05:48.495 real 0m0.324s 00:05:48.495 user 0m0.293s 00:05:48.495 sys 0m0.021s 00:05:48.495 16:20:01 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.495 16:20:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:48.495 ************************************ 00:05:48.495 END TEST env_memory 00:05:48.495 ************************************ 00:05:48.495 16:20:01 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.495 
16:20:01 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.495 16:20:01 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.495 16:20:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.495 ************************************ 00:05:48.495 START TEST env_vtophys 00:05:48.495 ************************************ 00:05:48.495 16:20:01 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:48.495 EAL: lib.eal log level changed from notice to debug 00:05:48.495 EAL: Detected lcore 0 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 1 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 2 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 3 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 4 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 5 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 6 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 7 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 8 as core 0 on socket 0 00:05:48.495 EAL: Detected lcore 9 as core 0 on socket 0 00:05:48.495 EAL: Maximum logical cores by configuration: 128 00:05:48.495 EAL: Detected CPU lcores: 10 00:05:48.495 EAL: Detected NUMA nodes: 1 00:05:48.495 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:48.495 EAL: Detected shared linkage of DPDK 00:05:48.495 EAL: No shared files mode enabled, IPC will be disabled 00:05:48.495 EAL: Selected IOVA mode 'PA' 00:05:48.495 EAL: Probing VFIO support... 00:05:48.495 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.495 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:48.495 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.495 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.495 EAL: Setting up physically contiguous memory... 
00:05:48.495 EAL: Setting maximum number of open files to 524288 00:05:48.495 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.495 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.495 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.495 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.495 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.495 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.495 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.495 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.495 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.495 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.495 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.495 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.495 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.495 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.495 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.495 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.495 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.495 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.495 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.495 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.495 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.495 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.495 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:48.495 EAL: Hugepages will be freed exactly as allocated. 
00:05:48.495 EAL: No shared files mode enabled, IPC is disabled 00:05:48.495 EAL: No shared files mode enabled, IPC is disabled 00:05:48.755 EAL: TSC frequency is ~2290000 KHz 00:05:48.755 EAL: Main lcore 0 is ready (tid=7fa14bcb6a40;cpuset=[0]) 00:05:48.755 EAL: Trying to obtain current memory policy. 00:05:48.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.755 EAL: Restoring previous memory policy: 0 00:05:48.755 EAL: request: mp_malloc_sync 00:05:48.755 EAL: No shared files mode enabled, IPC is disabled 00:05:48.755 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.755 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.755 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.755 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.755 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:48.755 00:05:48.755 00:05:48.755 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.755 http://cunit.sourceforge.net/ 00:05:48.755 00:05:48.755 00:05:48.755 Suite: components_suite 00:05:49.013 Test: vtophys_malloc_test ...passed 00:05:49.013 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:49.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.013 EAL: Restoring previous memory policy: 4 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was expanded by 4MB 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was shrunk by 4MB 00:05:49.013 EAL: Trying to obtain current memory policy. 
00:05:49.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.013 EAL: Restoring previous memory policy: 4 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was expanded by 6MB 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was shrunk by 6MB 00:05:49.013 EAL: Trying to obtain current memory policy. 00:05:49.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.013 EAL: Restoring previous memory policy: 4 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was expanded by 10MB 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was shrunk by 10MB 00:05:49.013 EAL: Trying to obtain current memory policy. 00:05:49.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.013 EAL: Restoring previous memory policy: 4 00:05:49.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.013 EAL: request: mp_malloc_sync 00:05:49.013 EAL: No shared files mode enabled, IPC is disabled 00:05:49.013 EAL: Heap on socket 0 was expanded by 18MB 00:05:49.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.272 EAL: request: mp_malloc_sync 00:05:49.272 EAL: No shared files mode enabled, IPC is disabled 00:05:49.272 EAL: Heap on socket 0 was shrunk by 18MB 00:05:49.272 EAL: Trying to obtain current memory policy. 
00:05:49.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.272 EAL: Restoring previous memory policy: 4 00:05:49.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.272 EAL: request: mp_malloc_sync 00:05:49.272 EAL: No shared files mode enabled, IPC is disabled 00:05:49.272 EAL: Heap on socket 0 was expanded by 34MB 00:05:49.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.272 EAL: request: mp_malloc_sync 00:05:49.272 EAL: No shared files mode enabled, IPC is disabled 00:05:49.272 EAL: Heap on socket 0 was shrunk by 34MB 00:05:49.272 EAL: Trying to obtain current memory policy. 00:05:49.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.272 EAL: Restoring previous memory policy: 4 00:05:49.272 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.272 EAL: request: mp_malloc_sync 00:05:49.272 EAL: No shared files mode enabled, IPC is disabled 00:05:49.272 EAL: Heap on socket 0 was expanded by 66MB 00:05:49.532 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.532 EAL: request: mp_malloc_sync 00:05:49.532 EAL: No shared files mode enabled, IPC is disabled 00:05:49.532 EAL: Heap on socket 0 was shrunk by 66MB 00:05:49.532 EAL: Trying to obtain current memory policy. 00:05:49.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.532 EAL: Restoring previous memory policy: 4 00:05:49.532 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.532 EAL: request: mp_malloc_sync 00:05:49.532 EAL: No shared files mode enabled, IPC is disabled 00:05:49.532 EAL: Heap on socket 0 was expanded by 130MB 00:05:49.792 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.792 EAL: request: mp_malloc_sync 00:05:49.792 EAL: No shared files mode enabled, IPC is disabled 00:05:49.792 EAL: Heap on socket 0 was shrunk by 130MB 00:05:50.051 EAL: Trying to obtain current memory policy. 
00:05:50.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.051 EAL: Restoring previous memory policy: 4 00:05:50.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.051 EAL: request: mp_malloc_sync 00:05:50.051 EAL: No shared files mode enabled, IPC is disabled 00:05:50.051 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.620 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.620 EAL: request: mp_malloc_sync 00:05:50.620 EAL: No shared files mode enabled, IPC is disabled 00:05:50.620 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.186 EAL: Trying to obtain current memory policy. 00:05:51.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.186 EAL: Restoring previous memory policy: 4 00:05:51.186 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.186 EAL: request: mp_malloc_sync 00:05:51.186 EAL: No shared files mode enabled, IPC is disabled 00:05:51.186 EAL: Heap on socket 0 was expanded by 514MB 00:05:52.124 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.382 EAL: request: mp_malloc_sync 00:05:52.382 EAL: No shared files mode enabled, IPC is disabled 00:05:52.382 EAL: Heap on socket 0 was shrunk by 514MB 00:05:53.317 EAL: Trying to obtain current memory policy. 
00:05:53.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.575 EAL: Restoring previous memory policy: 4 00:05:53.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.575 EAL: request: mp_malloc_sync 00:05:53.575 EAL: No shared files mode enabled, IPC is disabled 00:05:53.575 EAL: Heap on socket 0 was expanded by 1026MB 00:05:55.477 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.477 EAL: request: mp_malloc_sync 00:05:55.477 EAL: No shared files mode enabled, IPC is disabled 00:05:55.477 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:57.381 passed 00:05:57.381 00:05:57.381 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.381 suites 1 1 n/a 0 0 00:05:57.381 tests 2 2 2 0 0 00:05:57.381 asserts 5796 5796 5796 0 n/a 00:05:57.381 00:05:57.381 Elapsed time = 8.719 seconds 00:05:57.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.381 EAL: request: mp_malloc_sync 00:05:57.381 EAL: No shared files mode enabled, IPC is disabled 00:05:57.381 EAL: Heap on socket 0 was shrunk by 2MB 00:05:57.381 EAL: No shared files mode enabled, IPC is disabled 00:05:57.381 EAL: No shared files mode enabled, IPC is disabled 00:05:57.381 EAL: No shared files mode enabled, IPC is disabled 00:05:57.381 00:05:57.381 real 0m9.062s 00:05:57.381 user 0m8.049s 00:05:57.381 sys 0m0.848s 00:05:57.382 16:20:10 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.382 16:20:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:57.382 ************************************ 00:05:57.382 END TEST env_vtophys 00:05:57.382 ************************************ 00:05:57.641 16:20:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:57.641 16:20:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.641 16:20:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.641 16:20:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.641 
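The env_vtophys run above exercises DPDK's mem-event notification: each allocation large enough to grow the heap fires the registered 'spdk:(nil)' callback ("Heap on socket 0 was expanded by N MB"), and the matching free fires it again ("shrunk by N MB"). As a rough illustration of that notify-on-grow/shrink pattern (a toy model only — `ToyHeap` and all names here are invented for this sketch, not SPDK or DPDK code):

```python
from typing import Callable, List

class ToyHeap:
    """Minimal heap model that notifies registered callbacks on expand/shrink,
    loosely mirroring the mem-event callback lines in the log above."""

    def __init__(self) -> None:
        self.size_mb = 0
        self.callbacks: List[Callable[[str, int], None]] = []

    def register_callback(self, cb: Callable[[str, int], None]) -> None:
        # Analogous to registering a mem event callback before allocating.
        self.callbacks.append(cb)

    def _notify(self, event: str, mb: int) -> None:
        for cb in self.callbacks:
            cb(event, mb)

    def expand(self, mb: int) -> None:
        # Allocation grows the heap, then every callback is invoked.
        self.size_mb += mb
        self._notify("expanded", mb)

    def shrink(self, mb: int) -> None:
        # Freeing returns memory, firing the callbacks again.
        self.size_mb -= mb
        self._notify("shrunk", mb)

events: List[str] = []
heap = ToyHeap()
heap.register_callback(lambda ev, mb: events.append(f"Heap was {ev} by {mb}MB"))

# Mirrors the roughly doubling allocation sizes seen in the log (34MB, 66MB, 130MB ...)
for mb in (34, 66, 130):
    heap.expand(mb)
    heap.shrink(mb)
```

Each expand/shrink pair in the loop produces two callback events, matching the paired "expanded by"/"shrunk by" lines the test prints for every allocation size.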
************************************ 00:05:57.641 START TEST env_pci 00:05:57.641 ************************************ 00:05:57.641 16:20:10 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:57.641 00:05:57.641 00:05:57.641 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.641 http://cunit.sourceforge.net/ 00:05:57.641 00:05:57.641 00:05:57.641 Suite: pci 00:05:57.641 Test: pci_hook ...[2024-11-05 16:20:10.560211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56907 has claimed it 00:05:57.641 EAL: Cannot find device (10000:00:01.0) 00:05:57.641 passed 00:05:57.641 00:05:57.641 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.641 suites 1 1 n/a 0 0 00:05:57.641 tests 1 1 1 0 0 00:05:57.641 asserts 25 25 25 0 n/a 00:05:57.641 00:05:57.641 Elapsed time = 0.005 seconds 00:05:57.641 EAL: Failed to attach device on primary process 00:05:57.641 00:05:57.641 real 0m0.085s 00:05:57.641 user 0m0.037s 00:05:57.641 sys 0m0.046s 00:05:57.641 16:20:10 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.641 16:20:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:57.641 ************************************ 00:05:57.641 END TEST env_pci 00:05:57.641 ************************************ 00:05:57.641 16:20:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:57.641 16:20:10 env -- env/env.sh@15 -- # uname 00:05:57.641 16:20:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:57.641 16:20:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:57.641 16:20:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.641 16:20:10 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:57.642 16:20:10 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.642 16:20:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.642 ************************************ 00:05:57.642 START TEST env_dpdk_post_init 00:05:57.642 ************************************ 00:05:57.642 16:20:10 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:57.642 EAL: Detected CPU lcores: 10 00:05:57.642 EAL: Detected NUMA nodes: 1 00:05:57.642 EAL: Detected shared linkage of DPDK 00:05:57.901 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.901 EAL: Selected IOVA mode 'PA' 00:05:57.901 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.901 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:57.901 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:57.901 Starting DPDK initialization... 00:05:57.901 Starting SPDK post initialization... 00:05:57.901 SPDK NVMe probe 00:05:57.901 Attaching to 0000:00:10.0 00:05:57.901 Attaching to 0000:00:11.0 00:05:57.901 Attached to 0000:00:10.0 00:05:57.901 Attached to 0000:00:11.0 00:05:57.901 Cleaning up... 
00:05:57.901 00:05:57.901 real 0m0.285s 00:05:57.901 user 0m0.090s 00:05:57.901 sys 0m0.096s 00:05:57.901 16:20:10 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.901 16:20:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.901 ************************************ 00:05:57.901 END TEST env_dpdk_post_init 00:05:57.901 ************************************ 00:05:58.161 16:20:11 env -- env/env.sh@26 -- # uname 00:05:58.161 16:20:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:58.161 16:20:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.161 16:20:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.161 16:20:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.161 16:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.161 ************************************ 00:05:58.161 START TEST env_mem_callbacks 00:05:58.161 ************************************ 00:05:58.161 16:20:11 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.161 EAL: Detected CPU lcores: 10 00:05:58.161 EAL: Detected NUMA nodes: 1 00:05:58.161 EAL: Detected shared linkage of DPDK 00:05:58.161 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:58.161 EAL: Selected IOVA mode 'PA' 00:05:58.161 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:58.161 00:05:58.161 00:05:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.161 http://cunit.sourceforge.net/ 00:05:58.161 00:05:58.161 00:05:58.161 Suite: memory 00:05:58.161 Test: test ... 
00:05:58.161 register 0x200000200000 2097152 00:05:58.161 malloc 3145728 00:05:58.161 register 0x200000400000 4194304 00:05:58.161 buf 0x2000004fffc0 len 3145728 PASSED 00:05:58.161 malloc 64 00:05:58.161 buf 0x2000004ffec0 len 64 PASSED 00:05:58.161 malloc 4194304 00:05:58.161 register 0x200000800000 6291456 00:05:58.161 buf 0x2000009fffc0 len 4194304 PASSED 00:05:58.161 free 0x2000004fffc0 3145728 00:05:58.161 free 0x2000004ffec0 64 00:05:58.161 unregister 0x200000400000 4194304 PASSED 00:05:58.161 free 0x2000009fffc0 4194304 00:05:58.161 unregister 0x200000800000 6291456 PASSED 00:05:58.161 malloc 8388608 00:05:58.161 register 0x200000400000 10485760 00:05:58.161 buf 0x2000005fffc0 len 8388608 PASSED 00:05:58.161 free 0x2000005fffc0 8388608 00:05:58.420 unregister 0x200000400000 10485760 PASSED 00:05:58.420 passed 00:05:58.420 00:05:58.420 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.420 suites 1 1 n/a 0 0 00:05:58.420 tests 1 1 1 0 0 00:05:58.420 asserts 15 15 15 0 n/a 00:05:58.420 00:05:58.420 Elapsed time = 0.088 seconds 00:05:58.420 00:05:58.420 real 0m0.274s 00:05:58.420 user 0m0.107s 00:05:58.420 sys 0m0.064s 00:05:58.420 16:20:11 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.420 16:20:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:58.420 ************************************ 00:05:58.420 END TEST env_mem_callbacks 00:05:58.420 ************************************ 00:05:58.420 00:05:58.420 real 0m10.574s 00:05:58.420 user 0m8.817s 00:05:58.420 sys 0m1.385s 00:05:58.420 16:20:11 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.420 16:20:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.420 ************************************ 00:05:58.420 END TEST env 00:05:58.420 ************************************ 00:05:58.420 16:20:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:58.420 16:20:11 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.420 16:20:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.420 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.420 ************************************ 00:05:58.420 START TEST rpc 00:05:58.420 ************************************ 00:05:58.420 16:20:11 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:58.420 * Looking for test storage... 00:05:58.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.680 16:20:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.680 16:20:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.680 16:20:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.680 16:20:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.680 16:20:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.680 16:20:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:58.680 16:20:11 rpc -- scripts/common.sh@345 -- # : 1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.680 16:20:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.680 16:20:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@353 -- # local d=1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.680 16:20:11 rpc -- scripts/common.sh@355 -- # echo 1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.680 16:20:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@353 -- # local d=2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.680 16:20:11 rpc -- scripts/common.sh@355 -- # echo 2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.680 16:20:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.680 16:20:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.680 16:20:11 rpc -- scripts/common.sh@368 -- # return 0 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.680 --rc genhtml_branch_coverage=1 00:05:58.680 --rc genhtml_function_coverage=1 00:05:58.680 --rc genhtml_legend=1 00:05:58.680 --rc geninfo_all_blocks=1 00:05:58.680 --rc geninfo_unexecuted_blocks=1 00:05:58.680 00:05:58.680 ' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.680 --rc genhtml_branch_coverage=1 00:05:58.680 --rc genhtml_function_coverage=1 00:05:58.680 --rc genhtml_legend=1 00:05:58.680 --rc geninfo_all_blocks=1 00:05:58.680 --rc geninfo_unexecuted_blocks=1 00:05:58.680 00:05:58.680 ' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.680 --rc genhtml_branch_coverage=1 00:05:58.680 --rc genhtml_function_coverage=1 00:05:58.680 --rc genhtml_legend=1 00:05:58.680 --rc geninfo_all_blocks=1 00:05:58.680 --rc geninfo_unexecuted_blocks=1 00:05:58.680 00:05:58.680 ' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.680 --rc genhtml_branch_coverage=1 00:05:58.680 --rc genhtml_function_coverage=1 00:05:58.680 --rc genhtml_legend=1 00:05:58.680 --rc geninfo_all_blocks=1 00:05:58.680 --rc geninfo_unexecuted_blocks=1 00:05:58.680 00:05:58.680 ' 00:05:58.680 16:20:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57039 00:05:58.680 16:20:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:58.680 16:20:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.680 16:20:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57039 00:05:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@833 -- # '[' -z 57039 ']' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.680 16:20:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.680 [2024-11-05 16:20:11.737590] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:05:58.680 [2024-11-05 16:20:11.737849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57039 ] 00:05:58.940 [2024-11-05 16:20:11.915083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.199 [2024-11-05 16:20:12.039163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:59.199 [2024-11-05 16:20:12.039239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57039' to capture a snapshot of events at runtime. 00:05:59.199 [2024-11-05 16:20:12.039249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.199 [2024-11-05 16:20:12.039259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.199 [2024-11-05 16:20:12.039267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57039 for offline analysis/debug. 
00:05:59.199 [2024-11-05 16:20:12.040454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.137 16:20:12 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.137 16:20:12 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:00.137 16:20:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.137 16:20:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.137 16:20:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:00.137 16:20:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:00.137 16:20:12 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.137 16:20:12 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.137 16:20:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 ************************************ 00:06:00.137 START TEST rpc_integrity 00:06:00.137 ************************************ 00:06:00.137 16:20:12 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:00.137 16:20:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.137 16:20:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.137 16:20:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 16:20:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.137 16:20:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.137 16:20:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.137 16:20:13 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.137 { 00:06:00.137 "name": "Malloc0", 00:06:00.137 "aliases": [ 00:06:00.137 "acd61b82-0777-418f-a3c5-3ec552a6ebba" 00:06:00.137 ], 00:06:00.137 "product_name": "Malloc disk", 00:06:00.137 "block_size": 512, 00:06:00.137 "num_blocks": 16384, 00:06:00.137 "uuid": "acd61b82-0777-418f-a3c5-3ec552a6ebba", 00:06:00.137 "assigned_rate_limits": { 00:06:00.137 "rw_ios_per_sec": 0, 00:06:00.137 "rw_mbytes_per_sec": 0, 00:06:00.137 "r_mbytes_per_sec": 0, 00:06:00.137 "w_mbytes_per_sec": 0 00:06:00.137 }, 00:06:00.137 "claimed": false, 00:06:00.137 "zoned": false, 00:06:00.137 "supported_io_types": { 00:06:00.137 "read": true, 00:06:00.137 "write": true, 00:06:00.137 "unmap": true, 00:06:00.137 "flush": true, 00:06:00.137 "reset": true, 00:06:00.137 "nvme_admin": false, 00:06:00.137 "nvme_io": false, 00:06:00.137 "nvme_io_md": false, 00:06:00.137 "write_zeroes": true, 00:06:00.137 "zcopy": true, 00:06:00.137 "get_zone_info": false, 00:06:00.137 "zone_management": false, 00:06:00.137 "zone_append": false, 00:06:00.137 "compare": false, 00:06:00.137 "compare_and_write": false, 00:06:00.137 "abort": true, 00:06:00.137 "seek_hole": false, 
00:06:00.137 "seek_data": false, 00:06:00.137 "copy": true, 00:06:00.137 "nvme_iov_md": false 00:06:00.137 }, 00:06:00.137 "memory_domains": [ 00:06:00.137 { 00:06:00.137 "dma_device_id": "system", 00:06:00.137 "dma_device_type": 1 00:06:00.137 }, 00:06:00.137 { 00:06:00.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.137 "dma_device_type": 2 00:06:00.137 } 00:06:00.137 ], 00:06:00.137 "driver_specific": {} 00:06:00.137 } 00:06:00.137 ]' 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 [2024-11-05 16:20:13.151951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:00.137 [2024-11-05 16:20:13.152118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.137 [2024-11-05 16:20:13.152154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:00.137 [2024-11-05 16:20:13.152171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.137 [2024-11-05 16:20:13.154762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.137 [2024-11-05 16:20:13.154815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.137 Passthru0 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:00.137 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.137 { 00:06:00.137 "name": "Malloc0", 00:06:00.137 "aliases": [ 00:06:00.137 "acd61b82-0777-418f-a3c5-3ec552a6ebba" 00:06:00.137 ], 00:06:00.137 "product_name": "Malloc disk", 00:06:00.137 "block_size": 512, 00:06:00.137 "num_blocks": 16384, 00:06:00.137 "uuid": "acd61b82-0777-418f-a3c5-3ec552a6ebba", 00:06:00.137 "assigned_rate_limits": { 00:06:00.137 "rw_ios_per_sec": 0, 00:06:00.137 "rw_mbytes_per_sec": 0, 00:06:00.137 "r_mbytes_per_sec": 0, 00:06:00.137 "w_mbytes_per_sec": 0 00:06:00.137 }, 00:06:00.137 "claimed": true, 00:06:00.137 "claim_type": "exclusive_write", 00:06:00.137 "zoned": false, 00:06:00.137 "supported_io_types": { 00:06:00.137 "read": true, 00:06:00.137 "write": true, 00:06:00.137 "unmap": true, 00:06:00.137 "flush": true, 00:06:00.137 "reset": true, 00:06:00.137 "nvme_admin": false, 00:06:00.137 "nvme_io": false, 00:06:00.137 "nvme_io_md": false, 00:06:00.137 "write_zeroes": true, 00:06:00.137 "zcopy": true, 00:06:00.137 "get_zone_info": false, 00:06:00.137 "zone_management": false, 00:06:00.137 "zone_append": false, 00:06:00.137 "compare": false, 00:06:00.137 "compare_and_write": false, 00:06:00.137 "abort": true, 00:06:00.137 "seek_hole": false, 00:06:00.137 "seek_data": false, 00:06:00.137 "copy": true, 00:06:00.137 "nvme_iov_md": false 00:06:00.137 }, 00:06:00.137 "memory_domains": [ 00:06:00.137 { 00:06:00.137 "dma_device_id": "system", 00:06:00.137 "dma_device_type": 1 00:06:00.137 }, 00:06:00.137 { 00:06:00.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.137 "dma_device_type": 2 00:06:00.137 } 00:06:00.137 ], 00:06:00.137 "driver_specific": {} 00:06:00.137 }, 00:06:00.137 { 00:06:00.137 "name": "Passthru0", 00:06:00.137 "aliases": [ 00:06:00.137 "73fb16b3-b40c-524f-8c36-fdd3213d5fc2" 00:06:00.137 ], 00:06:00.137 "product_name": "passthru", 00:06:00.137 
"block_size": 512, 00:06:00.137 "num_blocks": 16384, 00:06:00.137 "uuid": "73fb16b3-b40c-524f-8c36-fdd3213d5fc2", 00:06:00.137 "assigned_rate_limits": { 00:06:00.137 "rw_ios_per_sec": 0, 00:06:00.137 "rw_mbytes_per_sec": 0, 00:06:00.137 "r_mbytes_per_sec": 0, 00:06:00.137 "w_mbytes_per_sec": 0 00:06:00.137 }, 00:06:00.137 "claimed": false, 00:06:00.137 "zoned": false, 00:06:00.137 "supported_io_types": { 00:06:00.137 "read": true, 00:06:00.137 "write": true, 00:06:00.137 "unmap": true, 00:06:00.137 "flush": true, 00:06:00.137 "reset": true, 00:06:00.137 "nvme_admin": false, 00:06:00.137 "nvme_io": false, 00:06:00.137 "nvme_io_md": false, 00:06:00.137 "write_zeroes": true, 00:06:00.137 "zcopy": true, 00:06:00.137 "get_zone_info": false, 00:06:00.137 "zone_management": false, 00:06:00.137 "zone_append": false, 00:06:00.137 "compare": false, 00:06:00.137 "compare_and_write": false, 00:06:00.137 "abort": true, 00:06:00.137 "seek_hole": false, 00:06:00.137 "seek_data": false, 00:06:00.137 "copy": true, 00:06:00.137 "nvme_iov_md": false 00:06:00.137 }, 00:06:00.137 "memory_domains": [ 00:06:00.137 { 00:06:00.137 "dma_device_id": "system", 00:06:00.137 "dma_device_type": 1 00:06:00.137 }, 00:06:00.137 { 00:06:00.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.137 "dma_device_type": 2 00:06:00.137 } 00:06:00.137 ], 00:06:00.137 "driver_specific": { 00:06:00.137 "passthru": { 00:06:00.137 "name": "Passthru0", 00:06:00.137 "base_bdev_name": "Malloc0" 00:06:00.137 } 00:06:00.137 } 00:06:00.137 } 00:06:00.137 ]' 00:06:00.137 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.397 16:20:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.397 00:06:00.397 real 0m0.377s 00:06:00.397 user 0m0.192s 00:06:00.397 sys 0m0.069s 00:06:00.397 ************************************ 00:06:00.397 END TEST rpc_integrity 00:06:00.397 ************************************ 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:00.397 16:20:13 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.397 16:20:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.397 16:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 ************************************ 00:06:00.397 START TEST rpc_plugins 00:06:00.397 ************************************ 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:00.397 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.397 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:00.397 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.397 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:00.397 { 00:06:00.397 "name": "Malloc1", 00:06:00.397 "aliases": [ 00:06:00.397 "22c8f23a-4627-4700-bb11-6485a22decfd" 00:06:00.397 ], 00:06:00.397 "product_name": "Malloc disk", 00:06:00.397 "block_size": 4096, 00:06:00.397 "num_blocks": 256, 00:06:00.397 "uuid": "22c8f23a-4627-4700-bb11-6485a22decfd", 00:06:00.397 "assigned_rate_limits": { 00:06:00.397 "rw_ios_per_sec": 0, 00:06:00.397 "rw_mbytes_per_sec": 0, 00:06:00.397 "r_mbytes_per_sec": 0, 00:06:00.397 "w_mbytes_per_sec": 0 00:06:00.397 }, 00:06:00.398 "claimed": false, 00:06:00.398 "zoned": false, 00:06:00.398 "supported_io_types": { 00:06:00.398 "read": true, 00:06:00.398 "write": true, 00:06:00.398 "unmap": true, 00:06:00.398 "flush": true, 00:06:00.398 "reset": true, 00:06:00.398 "nvme_admin": false, 00:06:00.398 "nvme_io": false, 00:06:00.398 "nvme_io_md": false, 00:06:00.398 "write_zeroes": true, 00:06:00.398 "zcopy": true, 00:06:00.398 "get_zone_info": false, 00:06:00.398 "zone_management": false, 00:06:00.398 "zone_append": false, 00:06:00.398 "compare": false, 00:06:00.398 "compare_and_write": false, 00:06:00.398 "abort": true, 00:06:00.398 "seek_hole": false, 00:06:00.398 "seek_data": false, 00:06:00.398 "copy": 
true, 00:06:00.398 "nvme_iov_md": false 00:06:00.398 }, 00:06:00.398 "memory_domains": [ 00:06:00.398 { 00:06:00.398 "dma_device_id": "system", 00:06:00.398 "dma_device_type": 1 00:06:00.398 }, 00:06:00.398 { 00:06:00.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.398 "dma_device_type": 2 00:06:00.398 } 00:06:00.398 ], 00:06:00.398 "driver_specific": {} 00:06:00.398 } 00:06:00.398 ]' 00:06:00.398 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:00.657 16:20:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:00.657 00:06:00.657 real 0m0.167s 00:06:00.657 user 0m0.099s 00:06:00.657 sys 0m0.021s 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.657 ************************************ 00:06:00.657 END TEST rpc_plugins 00:06:00.657 ************************************ 00:06:00.657 16:20:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 16:20:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:00.657 16:20:13 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.657 16:20:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.657 16:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.657 ************************************ 00:06:00.658 START TEST rpc_trace_cmd_test 00:06:00.658 ************************************ 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:00.658 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57039", 00:06:00.658 "tpoint_group_mask": "0x8", 00:06:00.658 "iscsi_conn": { 00:06:00.658 "mask": "0x2", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "scsi": { 00:06:00.658 "mask": "0x4", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "bdev": { 00:06:00.658 "mask": "0x8", 00:06:00.658 "tpoint_mask": "0xffffffffffffffff" 00:06:00.658 }, 00:06:00.658 "nvmf_rdma": { 00:06:00.658 "mask": "0x10", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "nvmf_tcp": { 00:06:00.658 "mask": "0x20", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "ftl": { 00:06:00.658 "mask": "0x40", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "blobfs": { 00:06:00.658 "mask": "0x80", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "dsa": { 00:06:00.658 "mask": "0x200", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "thread": { 00:06:00.658 "mask": "0x400", 00:06:00.658 
"tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "nvme_pcie": { 00:06:00.658 "mask": "0x800", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "iaa": { 00:06:00.658 "mask": "0x1000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "nvme_tcp": { 00:06:00.658 "mask": "0x2000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "bdev_nvme": { 00:06:00.658 "mask": "0x4000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "sock": { 00:06:00.658 "mask": "0x8000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "blob": { 00:06:00.658 "mask": "0x10000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "bdev_raid": { 00:06:00.658 "mask": "0x20000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 }, 00:06:00.658 "scheduler": { 00:06:00.658 "mask": "0x40000", 00:06:00.658 "tpoint_mask": "0x0" 00:06:00.658 } 00:06:00.658 }' 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:00.658 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:00.917 ************************************ 00:06:00.917 END TEST rpc_trace_cmd_test 00:06:00.917 ************************************ 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:00.917 00:06:00.917 real 0m0.255s 00:06:00.917 user 
0m0.207s 00:06:00.917 sys 0m0.035s 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.917 16:20:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.917 16:20:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:00.917 16:20:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:00.917 16:20:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:00.917 16:20:13 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.917 16:20:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.917 16:20:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.917 ************************************ 00:06:00.917 START TEST rpc_daemon_integrity 00:06:00.917 ************************************ 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.917 16:20:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.176 { 00:06:01.176 "name": "Malloc2", 00:06:01.176 "aliases": [ 00:06:01.176 "b2530900-7012-4952-95cf-4a9edfce7b63" 00:06:01.176 ], 00:06:01.176 "product_name": "Malloc disk", 00:06:01.176 "block_size": 512, 00:06:01.176 "num_blocks": 16384, 00:06:01.176 "uuid": "b2530900-7012-4952-95cf-4a9edfce7b63", 00:06:01.176 "assigned_rate_limits": { 00:06:01.176 "rw_ios_per_sec": 0, 00:06:01.176 "rw_mbytes_per_sec": 0, 00:06:01.176 "r_mbytes_per_sec": 0, 00:06:01.176 "w_mbytes_per_sec": 0 00:06:01.176 }, 00:06:01.176 "claimed": false, 00:06:01.176 "zoned": false, 00:06:01.176 "supported_io_types": { 00:06:01.176 "read": true, 00:06:01.176 "write": true, 00:06:01.176 "unmap": true, 00:06:01.176 "flush": true, 00:06:01.176 "reset": true, 00:06:01.176 "nvme_admin": false, 00:06:01.176 "nvme_io": false, 00:06:01.176 "nvme_io_md": false, 00:06:01.176 "write_zeroes": true, 00:06:01.176 "zcopy": true, 00:06:01.176 "get_zone_info": false, 00:06:01.176 "zone_management": false, 00:06:01.176 "zone_append": false, 00:06:01.176 "compare": false, 00:06:01.176 "compare_and_write": false, 00:06:01.176 "abort": true, 00:06:01.176 "seek_hole": false, 00:06:01.176 "seek_data": false, 00:06:01.176 "copy": true, 00:06:01.176 "nvme_iov_md": false 00:06:01.176 }, 00:06:01.176 "memory_domains": [ 00:06:01.176 { 00:06:01.176 "dma_device_id": "system", 00:06:01.176 "dma_device_type": 1 00:06:01.176 }, 00:06:01.176 { 00:06:01.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.176 "dma_device_type": 2 00:06:01.176 } 
00:06:01.176 ], 00:06:01.176 "driver_specific": {} 00:06:01.176 } 00:06:01.176 ]' 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.176 [2024-11-05 16:20:14.134784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:01.176 [2024-11-05 16:20:14.134932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.176 [2024-11-05 16:20:14.134964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:01.176 [2024-11-05 16:20:14.134977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.176 [2024-11-05 16:20:14.137495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.176 [2024-11-05 16:20:14.137550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.176 Passthru0 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.176 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.176 { 00:06:01.176 "name": "Malloc2", 00:06:01.176 "aliases": [ 00:06:01.176 "b2530900-7012-4952-95cf-4a9edfce7b63" 
00:06:01.176 ], 00:06:01.176 "product_name": "Malloc disk", 00:06:01.176 "block_size": 512, 00:06:01.176 "num_blocks": 16384, 00:06:01.176 "uuid": "b2530900-7012-4952-95cf-4a9edfce7b63", 00:06:01.176 "assigned_rate_limits": { 00:06:01.176 "rw_ios_per_sec": 0, 00:06:01.176 "rw_mbytes_per_sec": 0, 00:06:01.176 "r_mbytes_per_sec": 0, 00:06:01.176 "w_mbytes_per_sec": 0 00:06:01.176 }, 00:06:01.176 "claimed": true, 00:06:01.176 "claim_type": "exclusive_write", 00:06:01.176 "zoned": false, 00:06:01.176 "supported_io_types": { 00:06:01.176 "read": true, 00:06:01.176 "write": true, 00:06:01.176 "unmap": true, 00:06:01.176 "flush": true, 00:06:01.176 "reset": true, 00:06:01.176 "nvme_admin": false, 00:06:01.176 "nvme_io": false, 00:06:01.176 "nvme_io_md": false, 00:06:01.176 "write_zeroes": true, 00:06:01.176 "zcopy": true, 00:06:01.176 "get_zone_info": false, 00:06:01.176 "zone_management": false, 00:06:01.176 "zone_append": false, 00:06:01.176 "compare": false, 00:06:01.176 "compare_and_write": false, 00:06:01.176 "abort": true, 00:06:01.176 "seek_hole": false, 00:06:01.176 "seek_data": false, 00:06:01.176 "copy": true, 00:06:01.176 "nvme_iov_md": false 00:06:01.176 }, 00:06:01.176 "memory_domains": [ 00:06:01.176 { 00:06:01.176 "dma_device_id": "system", 00:06:01.176 "dma_device_type": 1 00:06:01.176 }, 00:06:01.176 { 00:06:01.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.177 "dma_device_type": 2 00:06:01.177 } 00:06:01.177 ], 00:06:01.177 "driver_specific": {} 00:06:01.177 }, 00:06:01.177 { 00:06:01.177 "name": "Passthru0", 00:06:01.177 "aliases": [ 00:06:01.177 "1151eb43-62a1-5de4-ab59-11aae78fb2bb" 00:06:01.177 ], 00:06:01.177 "product_name": "passthru", 00:06:01.177 "block_size": 512, 00:06:01.177 "num_blocks": 16384, 00:06:01.177 "uuid": "1151eb43-62a1-5de4-ab59-11aae78fb2bb", 00:06:01.177 "assigned_rate_limits": { 00:06:01.177 "rw_ios_per_sec": 0, 00:06:01.177 "rw_mbytes_per_sec": 0, 00:06:01.177 "r_mbytes_per_sec": 0, 00:06:01.177 "w_mbytes_per_sec": 0 
00:06:01.177 }, 00:06:01.177 "claimed": false, 00:06:01.177 "zoned": false, 00:06:01.177 "supported_io_types": { 00:06:01.177 "read": true, 00:06:01.177 "write": true, 00:06:01.177 "unmap": true, 00:06:01.177 "flush": true, 00:06:01.177 "reset": true, 00:06:01.177 "nvme_admin": false, 00:06:01.177 "nvme_io": false, 00:06:01.177 "nvme_io_md": false, 00:06:01.177 "write_zeroes": true, 00:06:01.177 "zcopy": true, 00:06:01.177 "get_zone_info": false, 00:06:01.177 "zone_management": false, 00:06:01.177 "zone_append": false, 00:06:01.177 "compare": false, 00:06:01.177 "compare_and_write": false, 00:06:01.177 "abort": true, 00:06:01.177 "seek_hole": false, 00:06:01.177 "seek_data": false, 00:06:01.177 "copy": true, 00:06:01.177 "nvme_iov_md": false 00:06:01.177 }, 00:06:01.177 "memory_domains": [ 00:06:01.177 { 00:06:01.177 "dma_device_id": "system", 00:06:01.177 "dma_device_type": 1 00:06:01.177 }, 00:06:01.177 { 00:06:01.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.177 "dma_device_type": 2 00:06:01.177 } 00:06:01.177 ], 00:06:01.177 "driver_specific": { 00:06:01.177 "passthru": { 00:06:01.177 "name": "Passthru0", 00:06:01.177 "base_bdev_name": "Malloc2" 00:06:01.177 } 00:06:01.177 } 00:06:01.177 } 00:06:01.177 ]' 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.177 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.436 00:06:01.436 real 0m0.361s 00:06:01.436 user 0m0.191s 00:06:01.436 sys 0m0.060s 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.436 16:20:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.436 ************************************ 00:06:01.436 END TEST rpc_daemon_integrity 00:06:01.436 ************************************ 00:06:01.436 16:20:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:01.436 16:20:14 rpc -- rpc/rpc.sh@84 -- # killprocess 57039 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@952 -- # '[' -z 57039 ']' 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@956 -- # kill -0 57039 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@957 -- # uname 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57039 00:06:01.436 killing process with pid 57039 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57039' 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@971 -- # kill 57039 00:06:01.436 16:20:14 rpc -- common/autotest_common.sh@976 -- # wait 57039 00:06:03.971 00:06:03.971 real 0m5.492s 00:06:03.971 user 0m6.047s 00:06:03.971 sys 0m0.967s 00:06:03.971 ************************************ 00:06:03.971 END TEST rpc 00:06:03.971 ************************************ 00:06:03.971 16:20:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.971 16:20:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.971 16:20:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:03.971 16:20:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.971 16:20:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.971 16:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.971 ************************************ 00:06:03.971 START TEST skip_rpc 00:06:03.971 ************************************ 00:06:03.971 16:20:16 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:04.231 * Looking for test storage... 
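The teardown sequence above (`kill -0` liveness probe, `ps --no-headers -o comm=` name check, then `kill` and `wait`) is the suite's `killprocess` helper tearing down the `spdk_tgt` reactor. A minimal self-contained sketch of that pattern (illustrative only, not SPDK's actual `autotest_common.sh` implementation, and demoed against a plain `sleep` rather than a reactor process):

```shell
#!/usr/bin/env bash

# Sketch of the killprocess pattern visible in the log: verify the pid
# is alive, look up its command name, then signal it and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # is the process alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # the log sees "reactor_0" here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; exit status reflects the signal
    return 0
}

# Demo against a short-lived background process:
sleep 30 &
killprocess_sketch $!
```

The `wait` matters: without it the target would linger as a zombie until the test shell exits, and a subsequent test could race against its shared-memory files.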
00:06:04.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.231 16:20:17 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.231 16:20:17 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.231 16:20:17 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.231 16:20:17 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.231 16:20:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.231 16:20:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.231 16:20:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.231 16:20:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.232 16:20:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.232 --rc genhtml_branch_coverage=1 00:06:04.232 --rc genhtml_function_coverage=1 00:06:04.232 --rc genhtml_legend=1 00:06:04.232 --rc geninfo_all_blocks=1 00:06:04.232 --rc geninfo_unexecuted_blocks=1 00:06:04.232 00:06:04.232 ' 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.232 --rc genhtml_branch_coverage=1 00:06:04.232 --rc genhtml_function_coverage=1 00:06:04.232 --rc genhtml_legend=1 00:06:04.232 --rc geninfo_all_blocks=1 00:06:04.232 --rc geninfo_unexecuted_blocks=1 00:06:04.232 00:06:04.232 ' 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:04.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.232 --rc genhtml_branch_coverage=1 00:06:04.232 --rc genhtml_function_coverage=1 00:06:04.232 --rc genhtml_legend=1 00:06:04.232 --rc geninfo_all_blocks=1 00:06:04.232 --rc geninfo_unexecuted_blocks=1 00:06:04.232 00:06:04.232 ' 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.232 --rc genhtml_branch_coverage=1 00:06:04.232 --rc genhtml_function_coverage=1 00:06:04.232 --rc genhtml_legend=1 00:06:04.232 --rc geninfo_all_blocks=1 00:06:04.232 --rc geninfo_unexecuted_blocks=1 00:06:04.232 00:06:04.232 ' 00:06:04.232 16:20:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.232 16:20:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.232 16:20:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.232 16:20:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 ************************************ 00:06:04.232 START TEST skip_rpc 00:06:04.232 ************************************ 00:06:04.232 16:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:04.232 16:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57274 00:06:04.232 16:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:04.232 16:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.232 16:20:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:04.232 [2024-11-05 16:20:17.308401] Starting SPDK v25.01-pre 
git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:04.232 [2024-11-05 16:20:17.308655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57274 ] 00:06:04.492 [2024-11-05 16:20:17.485516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.750 [2024-11-05 16:20:17.608609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57274 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57274 ']' 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57274 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57274 00:06:10.072 killing process with pid 57274 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57274' 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57274 00:06:10.072 16:20:22 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57274 00:06:11.975 00:06:11.975 real 0m7.636s 00:06:11.975 user 0m7.132s 00:06:11.975 sys 0m0.409s 00:06:11.975 ************************************ 00:06:11.975 END TEST skip_rpc 00:06:11.975 ************************************ 00:06:11.975 16:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.975 16:20:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 16:20:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:11.975 16:20:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.975 16:20:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.975 16:20:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 
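The `skip_rpc` run above exercises the expected-failure idiom: `spdk_tgt` is started with `--no-rpc-server`, so `NOT rpc_cmd spdk_get_version` passes precisely because the RPC call fails (`es=1`). A minimal sketch of that `NOT`/`es` pattern, simplified from what the `autotest_common.sh` trace shows (the real helper also validates the wrapped command and caps `es` at 128):

```shell
#!/usr/bin/env bash

# Run a command that is *expected* to fail; succeed only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # invert: nonzero exit from the wrapped command is a pass
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success detected"
```

Inverting the exit status this way lets the negative test participate in the same `set -e`-style pass/fail accounting as every positive assertion in the suite.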
************************************ 00:06:11.975 START TEST skip_rpc_with_json 00:06:11.975 ************************************ 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57378 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57378 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57378 ']' 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.975 16:20:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 [2024-11-05 16:20:25.009321] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:06:11.975 [2024-11-05 16:20:25.009570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57378 ] 00:06:12.236 [2024-11-05 16:20:25.206027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.495 [2024-11-05 16:20:25.342045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.433 [2024-11-05 16:20:26.347042] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:13.433 request: 00:06:13.433 { 00:06:13.433 "trtype": "tcp", 00:06:13.433 "method": "nvmf_get_transports", 00:06:13.433 "req_id": 1 00:06:13.433 } 00:06:13.433 Got JSON-RPC error response 00:06:13.433 response: 00:06:13.433 { 00:06:13.433 "code": -19, 00:06:13.433 "message": "No such device" 00:06:13.433 } 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.433 [2024-11-05 16:20:26.359178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.433 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.694 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.694 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:13.694 { 00:06:13.694 "subsystems": [ 00:06:13.694 { 00:06:13.694 "subsystem": "fsdev", 00:06:13.694 "config": [ 00:06:13.694 { 00:06:13.694 "method": "fsdev_set_opts", 00:06:13.694 "params": { 00:06:13.694 "fsdev_io_pool_size": 65535, 00:06:13.694 "fsdev_io_cache_size": 256 00:06:13.694 } 00:06:13.694 } 00:06:13.694 ] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "keyring", 00:06:13.694 "config": [] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "iobuf", 00:06:13.694 "config": [ 00:06:13.694 { 00:06:13.694 "method": "iobuf_set_options", 00:06:13.694 "params": { 00:06:13.694 "small_pool_count": 8192, 00:06:13.694 "large_pool_count": 1024, 00:06:13.694 "small_bufsize": 8192, 00:06:13.694 "large_bufsize": 135168, 00:06:13.694 "enable_numa": false 00:06:13.694 } 00:06:13.694 } 00:06:13.694 ] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "sock", 00:06:13.694 "config": [ 00:06:13.694 { 00:06:13.694 "method": "sock_set_default_impl", 00:06:13.694 "params": { 00:06:13.694 "impl_name": "posix" 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "sock_impl_set_options", 00:06:13.694 "params": { 00:06:13.694 "impl_name": "ssl", 00:06:13.694 "recv_buf_size": 4096, 00:06:13.694 "send_buf_size": 4096, 00:06:13.694 "enable_recv_pipe": true, 00:06:13.694 "enable_quickack": false, 00:06:13.694 
"enable_placement_id": 0, 00:06:13.694 "enable_zerocopy_send_server": true, 00:06:13.694 "enable_zerocopy_send_client": false, 00:06:13.694 "zerocopy_threshold": 0, 00:06:13.694 "tls_version": 0, 00:06:13.694 "enable_ktls": false 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "sock_impl_set_options", 00:06:13.694 "params": { 00:06:13.694 "impl_name": "posix", 00:06:13.694 "recv_buf_size": 2097152, 00:06:13.694 "send_buf_size": 2097152, 00:06:13.694 "enable_recv_pipe": true, 00:06:13.694 "enable_quickack": false, 00:06:13.694 "enable_placement_id": 0, 00:06:13.694 "enable_zerocopy_send_server": true, 00:06:13.694 "enable_zerocopy_send_client": false, 00:06:13.694 "zerocopy_threshold": 0, 00:06:13.694 "tls_version": 0, 00:06:13.694 "enable_ktls": false 00:06:13.694 } 00:06:13.694 } 00:06:13.694 ] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "vmd", 00:06:13.694 "config": [] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "accel", 00:06:13.694 "config": [ 00:06:13.694 { 00:06:13.694 "method": "accel_set_options", 00:06:13.694 "params": { 00:06:13.694 "small_cache_size": 128, 00:06:13.694 "large_cache_size": 16, 00:06:13.694 "task_count": 2048, 00:06:13.694 "sequence_count": 2048, 00:06:13.694 "buf_count": 2048 00:06:13.694 } 00:06:13.694 } 00:06:13.694 ] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "bdev", 00:06:13.694 "config": [ 00:06:13.694 { 00:06:13.694 "method": "bdev_set_options", 00:06:13.694 "params": { 00:06:13.694 "bdev_io_pool_size": 65535, 00:06:13.694 "bdev_io_cache_size": 256, 00:06:13.694 "bdev_auto_examine": true, 00:06:13.694 "iobuf_small_cache_size": 128, 00:06:13.694 "iobuf_large_cache_size": 16 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "bdev_raid_set_options", 00:06:13.694 "params": { 00:06:13.694 "process_window_size_kb": 1024, 00:06:13.694 "process_max_bandwidth_mb_sec": 0 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "bdev_iscsi_set_options", 
00:06:13.694 "params": { 00:06:13.694 "timeout_sec": 30 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "bdev_nvme_set_options", 00:06:13.694 "params": { 00:06:13.694 "action_on_timeout": "none", 00:06:13.694 "timeout_us": 0, 00:06:13.694 "timeout_admin_us": 0, 00:06:13.694 "keep_alive_timeout_ms": 10000, 00:06:13.694 "arbitration_burst": 0, 00:06:13.694 "low_priority_weight": 0, 00:06:13.694 "medium_priority_weight": 0, 00:06:13.694 "high_priority_weight": 0, 00:06:13.694 "nvme_adminq_poll_period_us": 10000, 00:06:13.694 "nvme_ioq_poll_period_us": 0, 00:06:13.694 "io_queue_requests": 0, 00:06:13.694 "delay_cmd_submit": true, 00:06:13.694 "transport_retry_count": 4, 00:06:13.694 "bdev_retry_count": 3, 00:06:13.694 "transport_ack_timeout": 0, 00:06:13.694 "ctrlr_loss_timeout_sec": 0, 00:06:13.694 "reconnect_delay_sec": 0, 00:06:13.694 "fast_io_fail_timeout_sec": 0, 00:06:13.694 "disable_auto_failback": false, 00:06:13.694 "generate_uuids": false, 00:06:13.694 "transport_tos": 0, 00:06:13.694 "nvme_error_stat": false, 00:06:13.694 "rdma_srq_size": 0, 00:06:13.694 "io_path_stat": false, 00:06:13.694 "allow_accel_sequence": false, 00:06:13.694 "rdma_max_cq_size": 0, 00:06:13.694 "rdma_cm_event_timeout_ms": 0, 00:06:13.694 "dhchap_digests": [ 00:06:13.694 "sha256", 00:06:13.694 "sha384", 00:06:13.694 "sha512" 00:06:13.694 ], 00:06:13.694 "dhchap_dhgroups": [ 00:06:13.694 "null", 00:06:13.694 "ffdhe2048", 00:06:13.694 "ffdhe3072", 00:06:13.694 "ffdhe4096", 00:06:13.694 "ffdhe6144", 00:06:13.694 "ffdhe8192" 00:06:13.694 ] 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "bdev_nvme_set_hotplug", 00:06:13.694 "params": { 00:06:13.694 "period_us": 100000, 00:06:13.694 "enable": false 00:06:13.694 } 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "method": "bdev_wait_for_examine" 00:06:13.694 } 00:06:13.694 ] 00:06:13.694 }, 00:06:13.694 { 00:06:13.694 "subsystem": "scsi", 00:06:13.694 "config": null 00:06:13.694 }, 00:06:13.694 { 
00:06:13.694 "subsystem": "scheduler", 00:06:13.695 "config": [ 00:06:13.695 { 00:06:13.695 "method": "framework_set_scheduler", 00:06:13.695 "params": { 00:06:13.695 "name": "static" 00:06:13.695 } 00:06:13.695 } 00:06:13.695 ] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "vhost_scsi", 00:06:13.695 "config": [] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "vhost_blk", 00:06:13.695 "config": [] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "ublk", 00:06:13.695 "config": [] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "nbd", 00:06:13.695 "config": [] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "nvmf", 00:06:13.695 "config": [ 00:06:13.695 { 00:06:13.695 "method": "nvmf_set_config", 00:06:13.695 "params": { 00:06:13.695 "discovery_filter": "match_any", 00:06:13.695 "admin_cmd_passthru": { 00:06:13.695 "identify_ctrlr": false 00:06:13.695 }, 00:06:13.695 "dhchap_digests": [ 00:06:13.695 "sha256", 00:06:13.695 "sha384", 00:06:13.695 "sha512" 00:06:13.695 ], 00:06:13.695 "dhchap_dhgroups": [ 00:06:13.695 "null", 00:06:13.695 "ffdhe2048", 00:06:13.695 "ffdhe3072", 00:06:13.695 "ffdhe4096", 00:06:13.695 "ffdhe6144", 00:06:13.695 "ffdhe8192" 00:06:13.695 ] 00:06:13.695 } 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "method": "nvmf_set_max_subsystems", 00:06:13.695 "params": { 00:06:13.695 "max_subsystems": 1024 00:06:13.695 } 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "method": "nvmf_set_crdt", 00:06:13.695 "params": { 00:06:13.695 "crdt1": 0, 00:06:13.695 "crdt2": 0, 00:06:13.695 "crdt3": 0 00:06:13.695 } 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "method": "nvmf_create_transport", 00:06:13.695 "params": { 00:06:13.695 "trtype": "TCP", 00:06:13.695 "max_queue_depth": 128, 00:06:13.695 "max_io_qpairs_per_ctrlr": 127, 00:06:13.695 "in_capsule_data_size": 4096, 00:06:13.695 "max_io_size": 131072, 00:06:13.695 "io_unit_size": 131072, 00:06:13.695 "max_aq_depth": 128, 00:06:13.695 "num_shared_buffers": 511, 
00:06:13.695 "buf_cache_size": 4294967295, 00:06:13.695 "dif_insert_or_strip": false, 00:06:13.695 "zcopy": false, 00:06:13.695 "c2h_success": true, 00:06:13.695 "sock_priority": 0, 00:06:13.695 "abort_timeout_sec": 1, 00:06:13.695 "ack_timeout": 0, 00:06:13.695 "data_wr_pool_size": 0 00:06:13.695 } 00:06:13.695 } 00:06:13.695 ] 00:06:13.695 }, 00:06:13.695 { 00:06:13.695 "subsystem": "iscsi", 00:06:13.695 "config": [ 00:06:13.695 { 00:06:13.695 "method": "iscsi_set_options", 00:06:13.695 "params": { 00:06:13.695 "node_base": "iqn.2016-06.io.spdk", 00:06:13.695 "max_sessions": 128, 00:06:13.695 "max_connections_per_session": 2, 00:06:13.695 "max_queue_depth": 64, 00:06:13.695 "default_time2wait": 2, 00:06:13.695 "default_time2retain": 20, 00:06:13.695 "first_burst_length": 8192, 00:06:13.695 "immediate_data": true, 00:06:13.695 "allow_duplicated_isid": false, 00:06:13.695 "error_recovery_level": 0, 00:06:13.695 "nop_timeout": 60, 00:06:13.695 "nop_in_interval": 30, 00:06:13.695 "disable_chap": false, 00:06:13.695 "require_chap": false, 00:06:13.695 "mutual_chap": false, 00:06:13.695 "chap_group": 0, 00:06:13.695 "max_large_datain_per_connection": 64, 00:06:13.695 "max_r2t_per_connection": 4, 00:06:13.695 "pdu_pool_size": 36864, 00:06:13.695 "immediate_data_pool_size": 16384, 00:06:13.695 "data_out_pool_size": 2048 00:06:13.695 } 00:06:13.695 } 00:06:13.695 ] 00:06:13.695 } 00:06:13.695 ] 00:06:13.695 } 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57378 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57378 ']' 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57378 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57378 00:06:13.695 killing process with pid 57378 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57378' 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57378 00:06:13.695 16:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57378 00:06:16.225 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57434 00:06:16.225 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.225 16:20:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57434 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57434 ']' 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57434 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57434 00:06:21.511 killing process with pid 57434 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57434' 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57434 00:06:21.511 16:20:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57434 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.045 ************************************ 00:06:24.045 END TEST skip_rpc_with_json 00:06:24.045 ************************************ 00:06:24.045 00:06:24.045 real 0m12.149s 00:06:24.045 user 0m11.690s 00:06:24.045 sys 0m0.874s 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.045 16:20:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:24.045 16:20:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.045 16:20:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.045 16:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.045 ************************************ 00:06:24.045 START TEST skip_rpc_with_delay 00:06:24.045 ************************************ 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:24.045 
16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.045 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.046 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.046 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.046 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:24.046 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.304 [2024-11-05 16:20:37.209750] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.304 00:06:24.304 real 0m0.177s 00:06:24.304 user 0m0.086s 00:06:24.304 sys 0m0.088s 00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.304 ************************************ 00:06:24.304 END TEST skip_rpc_with_delay 00:06:24.304 ************************************ 00:06:24.304 16:20:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 16:20:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:24.304 16:20:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:24.304 16:20:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:24.304 16:20:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.304 16:20:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.304 16:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.304 ************************************ 00:06:24.304 START TEST exit_on_failed_rpc_init 00:06:24.304 ************************************ 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57573 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57573 00:06:24.304 16:20:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57573 ']' 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.304 16:20:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.562 [2024-11-05 16:20:37.418652] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:24.562 [2024-11-05 16:20:37.418899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57573 ] 00:06:24.562 [2024-11-05 16:20:37.584780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.820 [2024-11-05 16:20:37.723776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:25.755 16:20:38 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.755 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:25.756 16:20:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.014 [2024-11-05 16:20:38.876181] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:06:26.014 [2024-11-05 16:20:38.876477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57597 ] 00:06:26.014 [2024-11-05 16:20:39.056312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.271 [2024-11-05 16:20:39.202227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.271 [2024-11-05 16:20:39.202478] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:26.271 [2024-11-05 16:20:39.202589] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:26.271 [2024-11-05 16:20:39.202687] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57573 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57573 ']' 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57573 00:06:26.530 16:20:39 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57573 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.530 killing process with pid 57573 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57573' 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57573 00:06:26.530 16:20:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57573 00:06:29.821 00:06:29.821 real 0m5.132s 00:06:29.821 user 0m5.692s 00:06:29.821 sys 0m0.589s 00:06:29.821 16:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.821 16:20:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:29.821 ************************************ 00:06:29.821 END TEST exit_on_failed_rpc_init 00:06:29.821 ************************************ 00:06:29.821 16:20:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.821 00:06:29.821 real 0m25.532s 00:06:29.821 user 0m24.806s 00:06:29.821 sys 0m2.208s 00:06:29.821 16:20:42 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.821 ************************************ 00:06:29.821 END TEST skip_rpc 00:06:29.821 ************************************ 00:06:29.821 16:20:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.821 16:20:42 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.821 16:20:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.821 16:20:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.821 16:20:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.821 ************************************ 00:06:29.821 START TEST rpc_client 00:06:29.821 ************************************ 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:29.821 * Looking for test storage... 00:06:29.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.821 16:20:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.821 --rc genhtml_branch_coverage=1 00:06:29.821 --rc genhtml_function_coverage=1 00:06:29.821 --rc genhtml_legend=1 00:06:29.821 --rc geninfo_all_blocks=1 00:06:29.821 --rc geninfo_unexecuted_blocks=1 00:06:29.821 00:06:29.821 ' 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.821 --rc genhtml_branch_coverage=1 00:06:29.821 --rc genhtml_function_coverage=1 00:06:29.821 --rc 
genhtml_legend=1 00:06:29.821 --rc geninfo_all_blocks=1 00:06:29.821 --rc geninfo_unexecuted_blocks=1 00:06:29.821 00:06:29.821 ' 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.821 --rc genhtml_branch_coverage=1 00:06:29.821 --rc genhtml_function_coverage=1 00:06:29.821 --rc genhtml_legend=1 00:06:29.821 --rc geninfo_all_blocks=1 00:06:29.821 --rc geninfo_unexecuted_blocks=1 00:06:29.821 00:06:29.821 ' 00:06:29.821 16:20:42 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.821 --rc genhtml_branch_coverage=1 00:06:29.822 --rc genhtml_function_coverage=1 00:06:29.822 --rc genhtml_legend=1 00:06:29.822 --rc geninfo_all_blocks=1 00:06:29.822 --rc geninfo_unexecuted_blocks=1 00:06:29.822 00:06:29.822 ' 00:06:29.822 16:20:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:29.822 OK 00:06:29.822 16:20:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:29.822 00:06:29.822 real 0m0.263s 00:06:29.822 user 0m0.147s 00:06:29.822 sys 0m0.122s 00:06:29.822 16:20:42 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.822 16:20:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:29.822 ************************************ 00:06:29.822 END TEST rpc_client 00:06:29.822 ************************************ 00:06:29.822 16:20:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:29.822 16:20:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.822 16:20:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.822 16:20:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.822 ************************************ 00:06:29.822 START TEST json_config 
00:06:29.822 ************************************ 00:06:29.822 16:20:42 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:30.081 16:20:42 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.081 16:20:42 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.081 16:20:42 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.081 16:20:43 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.081 16:20:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.081 16:20:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.081 16:20:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.081 16:20:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.081 16:20:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.081 16:20:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:30.081 16:20:43 json_config -- scripts/common.sh@345 -- # : 1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.081 16:20:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.081 16:20:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@353 -- # local d=1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.081 16:20:43 json_config -- scripts/common.sh@355 -- # echo 1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.081 16:20:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@353 -- # local d=2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.081 16:20:43 json_config -- scripts/common.sh@355 -- # echo 2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.081 16:20:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.081 16:20:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.081 16:20:43 json_config -- scripts/common.sh@368 -- # return 0 00:06:30.081 16:20:43 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.081 16:20:43 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.081 --rc genhtml_branch_coverage=1 00:06:30.081 --rc genhtml_function_coverage=1 00:06:30.081 --rc genhtml_legend=1 00:06:30.081 --rc geninfo_all_blocks=1 00:06:30.081 --rc geninfo_unexecuted_blocks=1 00:06:30.081 00:06:30.081 ' 00:06:30.081 16:20:43 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.081 --rc genhtml_branch_coverage=1 00:06:30.081 --rc genhtml_function_coverage=1 00:06:30.081 --rc genhtml_legend=1 00:06:30.081 --rc geninfo_all_blocks=1 00:06:30.081 --rc geninfo_unexecuted_blocks=1 00:06:30.081 00:06:30.081 ' 00:06:30.081 16:20:43 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.081 --rc genhtml_branch_coverage=1 00:06:30.081 --rc genhtml_function_coverage=1 00:06:30.081 --rc genhtml_legend=1 00:06:30.081 --rc geninfo_all_blocks=1 00:06:30.081 --rc geninfo_unexecuted_blocks=1 00:06:30.081 00:06:30.081 ' 00:06:30.081 16:20:43 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.081 --rc genhtml_branch_coverage=1 00:06:30.081 --rc genhtml_function_coverage=1 00:06:30.081 --rc genhtml_legend=1 00:06:30.081 --rc geninfo_all_blocks=1 00:06:30.081 --rc geninfo_unexecuted_blocks=1 00:06:30.081 00:06:30.081 ' 00:06:30.081 16:20:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28c63530-75cb-4ffa-be40-6c238887710c 00:06:30.081 16:20:43 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=28c63530-75cb-4ffa-be40-6c238887710c 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.082 16:20:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.082 16:20:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.082 16:20:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.082 16:20:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.082 16:20:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.082 16:20:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.082 16:20:43 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.082 16:20:43 json_config -- paths/export.sh@5 -- # export PATH 00:06:30.082 16:20:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@51 -- # : 0 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.082 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.082 16:20:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:30.082 WARNING: No tests are enabled so not running JSON configuration tests 00:06:30.082 16:20:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:30.082 00:06:30.082 real 0m0.213s 00:06:30.082 user 0m0.145s 00:06:30.082 sys 0m0.074s 00:06:30.082 16:20:43 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.082 16:20:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.082 ************************************ 00:06:30.082 END TEST json_config 00:06:30.082 ************************************ 00:06:30.082 16:20:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.082 16:20:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.082 16:20:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.082 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:30.082 ************************************ 00:06:30.082 START TEST json_config_extra_key 00:06:30.082 ************************************ 00:06:30.082 16:20:43 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.341 16:20:43 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.341 16:20:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.341 --rc genhtml_branch_coverage=1 00:06:30.341 --rc genhtml_function_coverage=1 00:06:30.341 --rc genhtml_legend=1 00:06:30.341 --rc geninfo_all_blocks=1 00:06:30.341 --rc geninfo_unexecuted_blocks=1 00:06:30.341 00:06:30.341 ' 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.341 --rc genhtml_branch_coverage=1 00:06:30.341 --rc genhtml_function_coverage=1 00:06:30.341 --rc 
genhtml_legend=1 00:06:30.341 --rc geninfo_all_blocks=1 00:06:30.341 --rc geninfo_unexecuted_blocks=1 00:06:30.341 00:06:30.341 ' 00:06:30.341 16:20:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.341 --rc genhtml_branch_coverage=1 00:06:30.341 --rc genhtml_function_coverage=1 00:06:30.341 --rc genhtml_legend=1 00:06:30.341 --rc geninfo_all_blocks=1 00:06:30.341 --rc geninfo_unexecuted_blocks=1 00:06:30.342 00:06:30.342 ' 00:06:30.342 16:20:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.342 --rc genhtml_branch_coverage=1 00:06:30.342 --rc genhtml_function_coverage=1 00:06:30.342 --rc genhtml_legend=1 00:06:30.342 --rc geninfo_all_blocks=1 00:06:30.342 --rc geninfo_unexecuted_blocks=1 00:06:30.342 00:06:30.342 ' 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:28c63530-75cb-4ffa-be40-6c238887710c 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=28c63530-75cb-4ffa-be40-6c238887710c 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.342 16:20:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.342 16:20:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.342 16:20:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.342 16:20:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.342 16:20:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.342 16:20:43 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.342 16:20:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.342 16:20:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:30.342 16:20:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.342 16:20:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:30.342 INFO: launching applications... 
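The long `cmp_versions 1.15 '<' 2` trace above is scripts/common.sh deciding whether the installed lcov predates 2.0. A simplified, self-contained sketch of that dotted-version comparison idiom (this is an illustrative reimplementation, not the actual SPDK helper, which also handles `-`/`:` separators and other operators):

```shell
#!/usr/bin/env bash
# Simplified sketch of the strict less-than comparison that the
# cmp_versions trace above performs: split both versions on '.',
# then compare component by component, treating missing parts as 0.
# Numeric components only; a leading zero would be parsed as octal here.
lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions: not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This mirrors the trace, where `1.15 < 2` is decided at the first component (`ver1[v]=1` vs `ver2[v]=2`) and the function returns 0.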
00:06:30.342 16:20:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57812 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.342 Waiting for target to run... 00:06:30.342 16:20:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57812 /var/tmp/spdk_tgt.sock 00:06:30.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57812 ']' 00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.343 16:20:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.343 [2024-11-05 16:20:43.426932] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:30.343 [2024-11-05 16:20:43.427090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57812 ] 00:06:30.908 [2024-11-05 16:20:43.822320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.908 [2024-11-05 16:20:43.951490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.841 00:06:31.842 INFO: shutting down applications... 00:06:31.842 16:20:44 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.842 16:20:44 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:31.842 16:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:31.842 16:20:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57812 ]] 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57812 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:31.842 16:20:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.408 16:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.408 16:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.408 16:20:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:32.408 16:20:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.974 16:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.974 16:20:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.974 16:20:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:32.974 16:20:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.234 16:20:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.234 16:20:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.234 16:20:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:33.234 16:20:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.807 16:20:46 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:33.807 16:20:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.807 16:20:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:33.807 16:20:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.478 16:20:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.478 16:20:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.478 16:20:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:34.478 16:20:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.736 16:20:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.736 16:20:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.736 16:20:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:34.736 16:20:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57812 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:35.304 SPDK target shutdown done 00:06:35.304 Success 00:06:35.304 16:20:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:35.304 16:20:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:35.304 ************************************ 00:06:35.304 END TEST json_config_extra_key 00:06:35.304 ************************************ 00:06:35.304 00:06:35.304 real 0m5.152s 00:06:35.304 user 
0m4.611s 00:06:35.304 sys 0m0.556s 00:06:35.304 16:20:48 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.304 16:20:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.304 16:20:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.304 16:20:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.304 16:20:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.304 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:06:35.304 ************************************ 00:06:35.304 START TEST alias_rpc 00:06:35.304 ************************************ 00:06:35.304 16:20:48 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:35.304 * Looking for test storage... 00:06:35.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:35.304 16:20:48 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.304 16:20:48 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.304 16:20:48 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.562 16:20:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.562 --rc genhtml_branch_coverage=1 00:06:35.562 --rc genhtml_function_coverage=1 00:06:35.562 --rc genhtml_legend=1 00:06:35.562 --rc geninfo_all_blocks=1 00:06:35.562 --rc geninfo_unexecuted_blocks=1 00:06:35.562 
00:06:35.562 ' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.562 --rc genhtml_branch_coverage=1 00:06:35.562 --rc genhtml_function_coverage=1 00:06:35.562 --rc genhtml_legend=1 00:06:35.562 --rc geninfo_all_blocks=1 00:06:35.562 --rc geninfo_unexecuted_blocks=1 00:06:35.562 00:06:35.562 ' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.562 --rc genhtml_branch_coverage=1 00:06:35.562 --rc genhtml_function_coverage=1 00:06:35.562 --rc genhtml_legend=1 00:06:35.562 --rc geninfo_all_blocks=1 00:06:35.562 --rc geninfo_unexecuted_blocks=1 00:06:35.562 00:06:35.562 ' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.562 --rc genhtml_branch_coverage=1 00:06:35.562 --rc genhtml_function_coverage=1 00:06:35.562 --rc genhtml_legend=1 00:06:35.562 --rc geninfo_all_blocks=1 00:06:35.562 --rc geninfo_unexecuted_blocks=1 00:06:35.562 00:06:35.562 ' 00:06:35.562 16:20:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.562 16:20:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.562 16:20:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57930 00:06:35.562 16:20:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57930 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57930 ']' 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.562 16:20:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.562 [2024-11-05 16:20:48.605921] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:35.562 [2024-11-05 16:20:48.606171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57930 ] 00:06:35.822 [2024-11-05 16:20:48.785696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.080 [2024-11-05 16:20:48.926574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.012 16:20:49 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.012 16:20:49 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:37.012 16:20:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:37.271 16:20:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57930 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57930 ']' 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57930 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57930 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.271 16:20:50 alias_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57930' 00:06:37.271 killing process with pid 57930 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@971 -- # kill 57930 00:06:37.271 16:20:50 alias_rpc -- common/autotest_common.sh@976 -- # wait 57930 00:06:40.586 00:06:40.586 real 0m4.773s 00:06:40.586 user 0m4.941s 00:06:40.586 sys 0m0.568s 00:06:40.586 16:20:53 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.586 16:20:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.586 ************************************ 00:06:40.586 END TEST alias_rpc 00:06:40.586 ************************************ 00:06:40.586 16:20:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:40.586 16:20:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:40.586 16:20:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.586 16:20:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.586 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:40.586 ************************************ 00:06:40.586 START TEST spdkcli_tcp 00:06:40.586 ************************************ 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:40.586 * Looking for test storage... 
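The `killprocess` trace above follows a recognizable teardown pattern: probe liveness with `kill -0`, recover the process name with `ps -o comm=` (refusing to kill a `sudo` wrapper), then `kill` and `wait` to reap the child. The sketch below is a simplified reconstruction of that pattern, not the exact `common/autotest_common.sh` implementation; the function body and messages are assumptions modeled on the trace.

```shell
#!/usr/bind/env bash
# Simplified sketch of the killprocess pattern seen in the trace above
# (assumption: reconstructed from the xtrace, not SPDK's actual code).
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks the PID exists and is signalable.
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    # Never terminate a sudo wrapper directly (mirrors the '[ ... = sudo ]' check).
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child so the PID cannot be recycled under us.
    wait "$pid" 2>/dev/null
    return 0
}

# Usage: background a dummy worker, then tear it down.
sleep 60 &
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null && echo "still alive" || echo "stopped"
```

Checking liveness again after `wait` confirms the child was actually reaped, which is why the tests pair `kill` with `wait` rather than killing and moving on.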
00:06:40.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.586 16:20:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.586 --rc genhtml_branch_coverage=1 00:06:40.586 --rc genhtml_function_coverage=1 00:06:40.586 --rc genhtml_legend=1 00:06:40.586 --rc geninfo_all_blocks=1 00:06:40.586 --rc geninfo_unexecuted_blocks=1 00:06:40.586 00:06:40.586 ' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.586 --rc genhtml_branch_coverage=1 00:06:40.586 --rc genhtml_function_coverage=1 00:06:40.586 --rc genhtml_legend=1 00:06:40.586 --rc geninfo_all_blocks=1 00:06:40.586 --rc geninfo_unexecuted_blocks=1 00:06:40.586 00:06:40.586 ' 00:06:40.586 16:20:53 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.586 --rc genhtml_branch_coverage=1 00:06:40.586 --rc genhtml_function_coverage=1 00:06:40.586 --rc genhtml_legend=1 00:06:40.586 --rc geninfo_all_blocks=1 00:06:40.586 --rc geninfo_unexecuted_blocks=1 00:06:40.586 00:06:40.586 ' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.586 --rc genhtml_branch_coverage=1 00:06:40.586 --rc genhtml_function_coverage=1 00:06:40.586 --rc genhtml_legend=1 00:06:40.586 --rc geninfo_all_blocks=1 00:06:40.586 --rc geninfo_unexecuted_blocks=1 00:06:40.586 00:06:40.586 ' 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58043 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58043 00:06:40.586 16:20:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.586 16:20:53 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 58043 ']' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.586 16:20:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.586 [2024-11-05 16:20:53.473734] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:40.587 [2024-11-05 16:20:53.473877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58043 ] 00:06:40.587 [2024-11-05 16:20:53.659929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.846 [2024-11-05 16:20:53.793009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.846 [2024-11-05 16:20:53.793018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.783 16:20:54 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.783 16:20:54 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:41.783 16:20:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58065 00:06:41.783 16:20:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:41.783 16:20:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:42.042 [ 00:06:42.042 "bdev_malloc_delete", 
00:06:42.042 "bdev_malloc_create", 00:06:42.042 "bdev_null_resize", 00:06:42.042 "bdev_null_delete", 00:06:42.042 "bdev_null_create", 00:06:42.042 "bdev_nvme_cuse_unregister", 00:06:42.042 "bdev_nvme_cuse_register", 00:06:42.042 "bdev_opal_new_user", 00:06:42.042 "bdev_opal_set_lock_state", 00:06:42.042 "bdev_opal_delete", 00:06:42.042 "bdev_opal_get_info", 00:06:42.042 "bdev_opal_create", 00:06:42.042 "bdev_nvme_opal_revert", 00:06:42.042 "bdev_nvme_opal_init", 00:06:42.042 "bdev_nvme_send_cmd", 00:06:42.042 "bdev_nvme_set_keys", 00:06:42.042 "bdev_nvme_get_path_iostat", 00:06:42.042 "bdev_nvme_get_mdns_discovery_info", 00:06:42.042 "bdev_nvme_stop_mdns_discovery", 00:06:42.042 "bdev_nvme_start_mdns_discovery", 00:06:42.042 "bdev_nvme_set_multipath_policy", 00:06:42.042 "bdev_nvme_set_preferred_path", 00:06:42.042 "bdev_nvme_get_io_paths", 00:06:42.042 "bdev_nvme_remove_error_injection", 00:06:42.042 "bdev_nvme_add_error_injection", 00:06:42.042 "bdev_nvme_get_discovery_info", 00:06:42.042 "bdev_nvme_stop_discovery", 00:06:42.042 "bdev_nvme_start_discovery", 00:06:42.042 "bdev_nvme_get_controller_health_info", 00:06:42.042 "bdev_nvme_disable_controller", 00:06:42.042 "bdev_nvme_enable_controller", 00:06:42.042 "bdev_nvme_reset_controller", 00:06:42.042 "bdev_nvme_get_transport_statistics", 00:06:42.042 "bdev_nvme_apply_firmware", 00:06:42.042 "bdev_nvme_detach_controller", 00:06:42.042 "bdev_nvme_get_controllers", 00:06:42.042 "bdev_nvme_attach_controller", 00:06:42.042 "bdev_nvme_set_hotplug", 00:06:42.042 "bdev_nvme_set_options", 00:06:42.042 "bdev_passthru_delete", 00:06:42.042 "bdev_passthru_create", 00:06:42.042 "bdev_lvol_set_parent_bdev", 00:06:42.042 "bdev_lvol_set_parent", 00:06:42.042 "bdev_lvol_check_shallow_copy", 00:06:42.042 "bdev_lvol_start_shallow_copy", 00:06:42.042 "bdev_lvol_grow_lvstore", 00:06:42.042 "bdev_lvol_get_lvols", 00:06:42.042 "bdev_lvol_get_lvstores", 00:06:42.042 "bdev_lvol_delete", 00:06:42.042 "bdev_lvol_set_read_only", 
00:06:42.042 "bdev_lvol_resize", 00:06:42.042 "bdev_lvol_decouple_parent", 00:06:42.042 "bdev_lvol_inflate", 00:06:42.042 "bdev_lvol_rename", 00:06:42.042 "bdev_lvol_clone_bdev", 00:06:42.043 "bdev_lvol_clone", 00:06:42.043 "bdev_lvol_snapshot", 00:06:42.043 "bdev_lvol_create", 00:06:42.043 "bdev_lvol_delete_lvstore", 00:06:42.043 "bdev_lvol_rename_lvstore", 00:06:42.043 "bdev_lvol_create_lvstore", 00:06:42.043 "bdev_raid_set_options", 00:06:42.043 "bdev_raid_remove_base_bdev", 00:06:42.043 "bdev_raid_add_base_bdev", 00:06:42.043 "bdev_raid_delete", 00:06:42.043 "bdev_raid_create", 00:06:42.043 "bdev_raid_get_bdevs", 00:06:42.043 "bdev_error_inject_error", 00:06:42.043 "bdev_error_delete", 00:06:42.043 "bdev_error_create", 00:06:42.043 "bdev_split_delete", 00:06:42.043 "bdev_split_create", 00:06:42.043 "bdev_delay_delete", 00:06:42.043 "bdev_delay_create", 00:06:42.043 "bdev_delay_update_latency", 00:06:42.043 "bdev_zone_block_delete", 00:06:42.043 "bdev_zone_block_create", 00:06:42.043 "blobfs_create", 00:06:42.043 "blobfs_detect", 00:06:42.043 "blobfs_set_cache_size", 00:06:42.043 "bdev_aio_delete", 00:06:42.043 "bdev_aio_rescan", 00:06:42.043 "bdev_aio_create", 00:06:42.043 "bdev_ftl_set_property", 00:06:42.043 "bdev_ftl_get_properties", 00:06:42.043 "bdev_ftl_get_stats", 00:06:42.043 "bdev_ftl_unmap", 00:06:42.043 "bdev_ftl_unload", 00:06:42.043 "bdev_ftl_delete", 00:06:42.043 "bdev_ftl_load", 00:06:42.043 "bdev_ftl_create", 00:06:42.043 "bdev_virtio_attach_controller", 00:06:42.043 "bdev_virtio_scsi_get_devices", 00:06:42.043 "bdev_virtio_detach_controller", 00:06:42.043 "bdev_virtio_blk_set_hotplug", 00:06:42.043 "bdev_iscsi_delete", 00:06:42.043 "bdev_iscsi_create", 00:06:42.043 "bdev_iscsi_set_options", 00:06:42.043 "accel_error_inject_error", 00:06:42.043 "ioat_scan_accel_module", 00:06:42.043 "dsa_scan_accel_module", 00:06:42.043 "iaa_scan_accel_module", 00:06:42.043 "keyring_file_remove_key", 00:06:42.043 "keyring_file_add_key", 00:06:42.043 
"keyring_linux_set_options", 00:06:42.043 "fsdev_aio_delete", 00:06:42.043 "fsdev_aio_create", 00:06:42.043 "iscsi_get_histogram", 00:06:42.043 "iscsi_enable_histogram", 00:06:42.043 "iscsi_set_options", 00:06:42.043 "iscsi_get_auth_groups", 00:06:42.043 "iscsi_auth_group_remove_secret", 00:06:42.043 "iscsi_auth_group_add_secret", 00:06:42.043 "iscsi_delete_auth_group", 00:06:42.043 "iscsi_create_auth_group", 00:06:42.043 "iscsi_set_discovery_auth", 00:06:42.043 "iscsi_get_options", 00:06:42.043 "iscsi_target_node_request_logout", 00:06:42.043 "iscsi_target_node_set_redirect", 00:06:42.043 "iscsi_target_node_set_auth", 00:06:42.043 "iscsi_target_node_add_lun", 00:06:42.043 "iscsi_get_stats", 00:06:42.043 "iscsi_get_connections", 00:06:42.043 "iscsi_portal_group_set_auth", 00:06:42.043 "iscsi_start_portal_group", 00:06:42.043 "iscsi_delete_portal_group", 00:06:42.043 "iscsi_create_portal_group", 00:06:42.043 "iscsi_get_portal_groups", 00:06:42.043 "iscsi_delete_target_node", 00:06:42.043 "iscsi_target_node_remove_pg_ig_maps", 00:06:42.043 "iscsi_target_node_add_pg_ig_maps", 00:06:42.043 "iscsi_create_target_node", 00:06:42.043 "iscsi_get_target_nodes", 00:06:42.043 "iscsi_delete_initiator_group", 00:06:42.043 "iscsi_initiator_group_remove_initiators", 00:06:42.043 "iscsi_initiator_group_add_initiators", 00:06:42.043 "iscsi_create_initiator_group", 00:06:42.043 "iscsi_get_initiator_groups", 00:06:42.043 "nvmf_set_crdt", 00:06:42.043 "nvmf_set_config", 00:06:42.043 "nvmf_set_max_subsystems", 00:06:42.043 "nvmf_stop_mdns_prr", 00:06:42.043 "nvmf_publish_mdns_prr", 00:06:42.043 "nvmf_subsystem_get_listeners", 00:06:42.043 "nvmf_subsystem_get_qpairs", 00:06:42.043 "nvmf_subsystem_get_controllers", 00:06:42.043 "nvmf_get_stats", 00:06:42.043 "nvmf_get_transports", 00:06:42.043 "nvmf_create_transport", 00:06:42.043 "nvmf_get_targets", 00:06:42.043 "nvmf_delete_target", 00:06:42.043 "nvmf_create_target", 00:06:42.043 "nvmf_subsystem_allow_any_host", 00:06:42.043 
"nvmf_subsystem_set_keys", 00:06:42.043 "nvmf_subsystem_remove_host", 00:06:42.043 "nvmf_subsystem_add_host", 00:06:42.043 "nvmf_ns_remove_host", 00:06:42.043 "nvmf_ns_add_host", 00:06:42.043 "nvmf_subsystem_remove_ns", 00:06:42.043 "nvmf_subsystem_set_ns_ana_group", 00:06:42.043 "nvmf_subsystem_add_ns", 00:06:42.043 "nvmf_subsystem_listener_set_ana_state", 00:06:42.043 "nvmf_discovery_get_referrals", 00:06:42.043 "nvmf_discovery_remove_referral", 00:06:42.043 "nvmf_discovery_add_referral", 00:06:42.043 "nvmf_subsystem_remove_listener", 00:06:42.043 "nvmf_subsystem_add_listener", 00:06:42.043 "nvmf_delete_subsystem", 00:06:42.043 "nvmf_create_subsystem", 00:06:42.043 "nvmf_get_subsystems", 00:06:42.043 "env_dpdk_get_mem_stats", 00:06:42.043 "nbd_get_disks", 00:06:42.043 "nbd_stop_disk", 00:06:42.043 "nbd_start_disk", 00:06:42.043 "ublk_recover_disk", 00:06:42.043 "ublk_get_disks", 00:06:42.043 "ublk_stop_disk", 00:06:42.043 "ublk_start_disk", 00:06:42.043 "ublk_destroy_target", 00:06:42.043 "ublk_create_target", 00:06:42.043 "virtio_blk_create_transport", 00:06:42.043 "virtio_blk_get_transports", 00:06:42.043 "vhost_controller_set_coalescing", 00:06:42.043 "vhost_get_controllers", 00:06:42.043 "vhost_delete_controller", 00:06:42.043 "vhost_create_blk_controller", 00:06:42.043 "vhost_scsi_controller_remove_target", 00:06:42.043 "vhost_scsi_controller_add_target", 00:06:42.043 "vhost_start_scsi_controller", 00:06:42.043 "vhost_create_scsi_controller", 00:06:42.043 "thread_set_cpumask", 00:06:42.043 "scheduler_set_options", 00:06:42.043 "framework_get_governor", 00:06:42.043 "framework_get_scheduler", 00:06:42.043 "framework_set_scheduler", 00:06:42.043 "framework_get_reactors", 00:06:42.043 "thread_get_io_channels", 00:06:42.043 "thread_get_pollers", 00:06:42.043 "thread_get_stats", 00:06:42.043 "framework_monitor_context_switch", 00:06:42.043 "spdk_kill_instance", 00:06:42.043 "log_enable_timestamps", 00:06:42.043 "log_get_flags", 00:06:42.043 "log_clear_flag", 
00:06:42.043 "log_set_flag", 00:06:42.043 "log_get_level", 00:06:42.043 "log_set_level", 00:06:42.043 "log_get_print_level", 00:06:42.043 "log_set_print_level", 00:06:42.043 "framework_enable_cpumask_locks", 00:06:42.043 "framework_disable_cpumask_locks", 00:06:42.043 "framework_wait_init", 00:06:42.043 "framework_start_init", 00:06:42.043 "scsi_get_devices", 00:06:42.043 "bdev_get_histogram", 00:06:42.043 "bdev_enable_histogram", 00:06:42.043 "bdev_set_qos_limit", 00:06:42.043 "bdev_set_qd_sampling_period", 00:06:42.043 "bdev_get_bdevs", 00:06:42.043 "bdev_reset_iostat", 00:06:42.043 "bdev_get_iostat", 00:06:42.043 "bdev_examine", 00:06:42.043 "bdev_wait_for_examine", 00:06:42.043 "bdev_set_options", 00:06:42.043 "accel_get_stats", 00:06:42.043 "accel_set_options", 00:06:42.043 "accel_set_driver", 00:06:42.043 "accel_crypto_key_destroy", 00:06:42.043 "accel_crypto_keys_get", 00:06:42.043 "accel_crypto_key_create", 00:06:42.043 "accel_assign_opc", 00:06:42.043 "accel_get_module_info", 00:06:42.043 "accel_get_opc_assignments", 00:06:42.043 "vmd_rescan", 00:06:42.043 "vmd_remove_device", 00:06:42.043 "vmd_enable", 00:06:42.043 "sock_get_default_impl", 00:06:42.043 "sock_set_default_impl", 00:06:42.043 "sock_impl_set_options", 00:06:42.043 "sock_impl_get_options", 00:06:42.043 "iobuf_get_stats", 00:06:42.043 "iobuf_set_options", 00:06:42.043 "keyring_get_keys", 00:06:42.043 "framework_get_pci_devices", 00:06:42.043 "framework_get_config", 00:06:42.043 "framework_get_subsystems", 00:06:42.043 "fsdev_set_opts", 00:06:42.043 "fsdev_get_opts", 00:06:42.043 "trace_get_info", 00:06:42.043 "trace_get_tpoint_group_mask", 00:06:42.043 "trace_disable_tpoint_group", 00:06:42.043 "trace_enable_tpoint_group", 00:06:42.044 "trace_clear_tpoint_mask", 00:06:42.044 "trace_set_tpoint_mask", 00:06:42.044 "notify_get_notifications", 00:06:42.044 "notify_get_types", 00:06:42.044 "spdk_get_version", 00:06:42.044 "rpc_get_methods" 00:06:42.044 ] 00:06:42.044 16:20:55 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.044 16:20:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:42.044 16:20:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58043 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58043 ']' 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58043 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58043 00:06:42.044 killing process with pid 58043 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58043' 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58043 00:06:42.044 16:20:55 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58043 00:06:45.333 ************************************ 00:06:45.333 END TEST spdkcli_tcp 00:06:45.333 ************************************ 00:06:45.333 00:06:45.333 real 0m4.678s 00:06:45.333 user 0m8.477s 00:06:45.333 sys 0m0.623s 00:06:45.333 16:20:57 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.333 16:20:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.333 16:20:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.333 16:20:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.333 16:20:57 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.333 16:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:45.333 ************************************ 00:06:45.333 START TEST dpdk_mem_utility 00:06:45.333 ************************************ 00:06:45.333 16:20:57 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:45.333 * Looking for test storage... 00:06:45.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:45.333 16:20:57 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.333 16:20:57 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.333 16:20:57 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:45.333 
16:20:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.333 16:20:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.333 --rc genhtml_branch_coverage=1 00:06:45.333 --rc genhtml_function_coverage=1 00:06:45.333 --rc genhtml_legend=1 00:06:45.333 --rc geninfo_all_blocks=1 00:06:45.333 --rc geninfo_unexecuted_blocks=1 00:06:45.333 00:06:45.333 ' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.333 --rc 
genhtml_branch_coverage=1 00:06:45.333 --rc genhtml_function_coverage=1 00:06:45.333 --rc genhtml_legend=1 00:06:45.333 --rc geninfo_all_blocks=1 00:06:45.333 --rc geninfo_unexecuted_blocks=1 00:06:45.333 00:06:45.333 ' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.333 --rc genhtml_branch_coverage=1 00:06:45.333 --rc genhtml_function_coverage=1 00:06:45.333 --rc genhtml_legend=1 00:06:45.333 --rc geninfo_all_blocks=1 00:06:45.333 --rc geninfo_unexecuted_blocks=1 00:06:45.333 00:06:45.333 ' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.333 --rc genhtml_branch_coverage=1 00:06:45.333 --rc genhtml_function_coverage=1 00:06:45.333 --rc genhtml_legend=1 00:06:45.333 --rc geninfo_all_blocks=1 00:06:45.333 --rc geninfo_unexecuted_blocks=1 00:06:45.333 00:06:45.333 ' 00:06:45.333 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:45.333 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58176 00:06:45.333 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.333 16:20:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58176 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58176 ']' 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:45.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.333 16:20:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.333 [2024-11-05 16:20:58.146957] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:45.333 [2024-11-05 16:20:58.147080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58176 ] 00:06:45.333 [2024-11-05 16:20:58.323034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.592 [2024-11-05 16:20:58.447199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.530 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.530 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:46.530 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:46.530 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:46.530 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.530 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.530 { 00:06:46.530 "filename": "/tmp/spdk_mem_dump.txt" 00:06:46.530 } 00:06:46.530 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.530 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:46.530 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:46.530 1 heaps totaling size 816.000000 MiB 00:06:46.530 size: 
816.000000 MiB heap id: 0 00:06:46.530 end heaps---------- 00:06:46.530 9 mempools totaling size 595.772034 MiB 00:06:46.531 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:46.531 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:46.531 size: 92.545471 MiB name: bdev_io_58176 00:06:46.531 size: 50.003479 MiB name: msgpool_58176 00:06:46.531 size: 36.509338 MiB name: fsdev_io_58176 00:06:46.531 size: 21.763794 MiB name: PDU_Pool 00:06:46.531 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:46.531 size: 4.133484 MiB name: evtpool_58176 00:06:46.531 size: 0.026123 MiB name: Session_Pool 00:06:46.531 end mempools------- 00:06:46.531 6 memzones totaling size 4.142822 MiB 00:06:46.531 size: 1.000366 MiB name: RG_ring_0_58176 00:06:46.531 size: 1.000366 MiB name: RG_ring_1_58176 00:06:46.531 size: 1.000366 MiB name: RG_ring_4_58176 00:06:46.531 size: 1.000366 MiB name: RG_ring_5_58176 00:06:46.531 size: 0.125366 MiB name: RG_ring_2_58176 00:06:46.531 size: 0.015991 MiB name: RG_ring_3_58176 00:06:46.531 end memzones------- 00:06:46.531 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:46.531 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:06:46.531 list of free elements. 
size: 16.790649 MiB 00:06:46.531 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:46.531 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:46.531 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:46.531 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:46.531 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:46.531 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:46.531 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:46.531 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:46.531 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:46.531 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:46.531 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:46.531 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:06:46.531 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:46.531 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:46.531 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:46.531 element at address: 0x200012c00000 with size: 0.443481 MiB 00:06:46.531 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:46.531 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:46.531 list of standard malloc elements. 
size: 199.288452 MiB
00:06:46.531 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:46.531 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:46.531 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:06:46.531 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:06:46.531 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:46.531 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:46.531 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:06:46.531 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:46.531 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:46.531 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:06:46.531 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further elements of 0.000244 MiB each, addresses 0x2000002d7b00 through 0x20002806fe80, omitted]
00:06:46.533 list of memzone associated elements.
size: 599.920898 MiB
00:06:46.533 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:06:46.533 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:46.533 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:06:46.533 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:46.533 element at address: 0x200012df4740 with size: 92.045105 MiB
00:06:46.533 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58176_0
00:06:46.533 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:46.533 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58176_0
00:06:46.533 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:46.533 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58176_0
00:06:46.533 element at address: 0x2000197be900 with size: 20.255615 MiB
00:06:46.533 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:46.533 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:06:46.533 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:46.533 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:46.533 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58176_0
00:06:46.533 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:46.533 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58176
00:06:46.533 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:46.533 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58176
00:06:46.533 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:46.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:46.533 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:06:46.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:46.533 element at address: 0x200018afde00 with size: 1.008179 MiB
00:06:46.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:46.533 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:06:46.533 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:46.533 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:46.533 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58176
00:06:46.533 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:46.533 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58176
00:06:46.533 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:06:46.533 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58176
00:06:46.533 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:06:46.533 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58176
00:06:46.533 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:46.533 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58176
00:06:46.533 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:46.533 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58176
00:06:46.533 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:06:46.533 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:46.533 element at address: 0x200012c72280 with size: 0.500549 MiB
00:06:46.533 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:46.533 element at address: 0x20001967c440 with size: 0.250549 MiB
00:06:46.533 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:46.533 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:46.533 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58176
00:06:46.533 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:46.533 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58176
00:06:46.533 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:06:46.533 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:46.533 element at address: 0x200028064140 with size: 0.023804 MiB
00:06:46.533 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:46.533 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:46.533 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58176
00:06:46.533 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:06:46.534 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:46.534 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:46.534 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58176
00:06:46.534 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:46.534 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58176
00:06:46.534 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:46.534 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58176
00:06:46.534 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:06:46.534 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:46.534 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:46.534 16:20:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58176
00:06:46.534 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58176 ']'
00:06:46.534 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58176
00:06:46.534 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:06:46.534 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:46.534 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58176
00:06:46.792 killing process with pid 58176
16:20:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:46.792 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:46.792 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58176'
00:06:46.792 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58176
00:06:46.792 16:20:59 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58176
00:06:50.083 
00:06:50.083 real 0m4.620s
00:06:50.083 user 0m4.673s
00:06:50.083 sys 0m0.552s
00:06:50.083 16:21:02 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:50.083 ************************************
00:06:50.083 END TEST dpdk_mem_utility
00:06:50.083 ************************************
00:06:50.083 16:21:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:50.083 16:21:02 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:50.083 16:21:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:50.083 16:21:02 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:50.083 16:21:02 -- common/autotest_common.sh@10 -- # set +x
00:06:50.083 ************************************
00:06:50.083 START TEST event
00:06:50.083 ************************************
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:50.083 * Looking for test storage...
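[editor's note] The xtrace records above walk through SPDK's killprocess helper: guard against an empty pid, probe liveness with `kill -0`, inspect the process name, then send SIGTERM and wait. A minimal standalone sketch of that pattern follows; the function body and messages are illustrative, not SPDK's exact code from autotest_common.sh.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace: `kill -0` sends
# no signal, it only asks the kernel whether the pid exists and is
# signalable; only then do we TERM the process and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard in the trace
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                           # process already gone
    fi
    echo "killing process with pid $pid"
    kill "$pid"                            # default signal is SIGTERM
    wait "$pid" 2>/dev/null || true        # reap it if it is our child
    return 0
}
```

Probing with `kill -0` first is why the trace shows a bare `kill -0 58176` before the real `kill 58176`: it distinguishes "already exited" from "needs terminating" without racing on stale pids.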
00:06:50.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1691 -- # lcov --version
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:50.083 16:21:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:50.083 16:21:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:50.083 16:21:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:50.083 16:21:02 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:50.083 16:21:02 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:50.083 16:21:02 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:50.083 16:21:02 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:50.083 16:21:02 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:50.083 16:21:02 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:50.083 16:21:02 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:50.083 16:21:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:50.083 16:21:02 event -- scripts/common.sh@344 -- # case "$op" in
00:06:50.083 16:21:02 event -- scripts/common.sh@345 -- # : 1
00:06:50.083 16:21:02 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:50.083 16:21:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:50.083 16:21:02 event -- scripts/common.sh@365 -- # decimal 1
00:06:50.083 16:21:02 event -- scripts/common.sh@353 -- # local d=1
00:06:50.083 16:21:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:50.083 16:21:02 event -- scripts/common.sh@355 -- # echo 1
00:06:50.083 16:21:02 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:50.083 16:21:02 event -- scripts/common.sh@366 -- # decimal 2
00:06:50.083 16:21:02 event -- scripts/common.sh@353 -- # local d=2
00:06:50.083 16:21:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:50.083 16:21:02 event -- scripts/common.sh@355 -- # echo 2
00:06:50.083 16:21:02 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:50.083 16:21:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:50.083 16:21:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:50.083 16:21:02 event -- scripts/common.sh@368 -- # return 0
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.083 --rc genhtml_branch_coverage=1
00:06:50.083 --rc genhtml_function_coverage=1
00:06:50.083 --rc genhtml_legend=1
00:06:50.083 --rc geninfo_all_blocks=1
00:06:50.083 --rc geninfo_unexecuted_blocks=1
00:06:50.083 
00:06:50.083 '
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.083 --rc genhtml_branch_coverage=1
00:06:50.083 --rc genhtml_function_coverage=1
00:06:50.083 --rc genhtml_legend=1
00:06:50.083 --rc geninfo_all_blocks=1
00:06:50.083 --rc geninfo_unexecuted_blocks=1
00:06:50.083 
00:06:50.083 '
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.083 --rc genhtml_branch_coverage=1
00:06:50.083 --rc genhtml_function_coverage=1
00:06:50.083 --rc genhtml_legend=1
00:06:50.083 --rc geninfo_all_blocks=1
00:06:50.083 --rc geninfo_unexecuted_blocks=1
00:06:50.083 
00:06:50.083 '
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:50.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.083 --rc genhtml_branch_coverage=1
00:06:50.083 --rc genhtml_function_coverage=1
00:06:50.083 --rc genhtml_legend=1
00:06:50.083 --rc geninfo_all_blocks=1
00:06:50.083 --rc geninfo_unexecuted_blocks=1
00:06:50.083 
00:06:50.083 '
00:06:50.083 16:21:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:50.083 16:21:02 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:50.083 16:21:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:06:50.083 16:21:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:50.083 16:21:02 event -- common/autotest_common.sh@10 -- # set +x
00:06:50.083 ************************************
00:06:50.083 START TEST event_perf
00:06:50.083 ************************************
00:06:50.083 16:21:02 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:50.083 Running I/O for 1 seconds...[2024-11-05 16:21:02.835808] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:06:50.083 [2024-11-05 16:21:02.835946] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58284 ] 00:06:50.083 [2024-11-05 16:21:03.018291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.083 [2024-11-05 16:21:03.169946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.083 [2024-11-05 16:21:03.170191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.083 Running I/O for 1 seconds...[2024-11-05 16:21:03.170094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.083 [2024-11-05 16:21:03.170227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.459 00:06:51.459 lcore 0: 184075 00:06:51.459 lcore 1: 184073 00:06:51.459 lcore 2: 184075 00:06:51.459 lcore 3: 184074 00:06:51.459 done. 
00:06:51.459 ************************************ 00:06:51.459 END TEST event_perf 00:06:51.459 ************************************ 00:06:51.459 00:06:51.459 real 0m1.643s 00:06:51.459 user 0m4.398s 00:06:51.459 sys 0m0.117s 00:06:51.459 16:21:04 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.459 16:21:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.459 16:21:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.459 16:21:04 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:51.459 16:21:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.459 16:21:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.459 ************************************ 00:06:51.459 START TEST event_reactor 00:06:51.459 ************************************ 00:06:51.459 16:21:04 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.459 [2024-11-05 16:21:04.541185] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:06:51.459 [2024-11-05 16:21:04.541402] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58329 ] 00:06:51.717 [2024-11-05 16:21:04.714689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.976 [2024-11-05 16:21:04.839297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.355 test_start 00:06:53.355 oneshot 00:06:53.355 tick 100 00:06:53.355 tick 100 00:06:53.355 tick 250 00:06:53.355 tick 100 00:06:53.355 tick 100 00:06:53.355 tick 250 00:06:53.355 tick 100 00:06:53.355 tick 500 00:06:53.355 tick 100 00:06:53.355 tick 100 00:06:53.355 tick 250 00:06:53.355 tick 100 00:06:53.355 tick 100 00:06:53.355 test_end 00:06:53.355 00:06:53.355 real 0m1.568s 00:06:53.355 user 0m1.376s 00:06:53.355 sys 0m0.084s 00:06:53.355 16:21:06 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.355 16:21:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:53.355 ************************************ 00:06:53.355 END TEST event_reactor 00:06:53.355 ************************************ 00:06:53.355 16:21:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.355 16:21:06 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:53.355 16:21:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.355 16:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.355 ************************************ 00:06:53.355 START TEST event_reactor_perf 00:06:53.355 ************************************ 00:06:53.355 16:21:06 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.355 [2024-11-05 
16:21:06.174239] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:53.355 [2024-11-05 16:21:06.174348] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58365 ] 00:06:53.355 [2024-11-05 16:21:06.352797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.614 [2024-11-05 16:21:06.474392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.994 test_start 00:06:54.994 test_end 00:06:54.994 Performance: 361614 events per second 00:06:54.994 00:06:54.994 real 0m1.575s 00:06:54.994 user 0m1.383s 00:06:54.994 sys 0m0.084s 00:06:54.994 ************************************ 00:06:54.994 END TEST event_reactor_perf 00:06:54.994 ************************************ 00:06:54.994 16:21:07 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.994 16:21:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 16:21:07 event -- event/event.sh@49 -- # uname -s 00:06:54.994 16:21:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:54.994 16:21:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:54.994 16:21:07 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.994 16:21:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.994 16:21:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 ************************************ 00:06:54.994 START TEST event_scheduler 00:06:54.994 ************************************ 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:54.994 * Looking for test storage... 
00:06:54.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.994 16:21:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.994 --rc genhtml_branch_coverage=1 00:06:54.994 --rc genhtml_function_coverage=1 00:06:54.994 --rc genhtml_legend=1 00:06:54.994 --rc geninfo_all_blocks=1 00:06:54.994 --rc geninfo_unexecuted_blocks=1 00:06:54.994 00:06:54.994 ' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.994 --rc genhtml_branch_coverage=1 00:06:54.994 --rc genhtml_function_coverage=1 00:06:54.994 --rc 
genhtml_legend=1 00:06:54.994 --rc geninfo_all_blocks=1 00:06:54.994 --rc geninfo_unexecuted_blocks=1 00:06:54.994 00:06:54.994 ' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.994 --rc genhtml_branch_coverage=1 00:06:54.994 --rc genhtml_function_coverage=1 00:06:54.994 --rc genhtml_legend=1 00:06:54.994 --rc geninfo_all_blocks=1 00:06:54.994 --rc geninfo_unexecuted_blocks=1 00:06:54.994 00:06:54.994 ' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.994 --rc genhtml_branch_coverage=1 00:06:54.994 --rc genhtml_function_coverage=1 00:06:54.994 --rc genhtml_legend=1 00:06:54.994 --rc geninfo_all_blocks=1 00:06:54.994 --rc geninfo_unexecuted_blocks=1 00:06:54.994 00:06:54.994 ' 00:06:54.994 16:21:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:54.994 16:21:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58436 00:06:54.994 16:21:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:54.994 16:21:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.994 16:21:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58436 00:06:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58436 ']' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.994 16:21:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 [2024-11-05 16:21:08.078831] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:06:54.994 [2024-11-05 16:21:08.078956] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58436 ] 00:06:55.253 [2024-11-05 16:21:08.256957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.512 [2024-11-05 16:21:08.383617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.512 [2024-11-05 16:21:08.383805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.512 [2024-11-05 16:21:08.383952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.512 [2024-11-05 16:21:08.383986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:56.089 16:21:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.089 16:21:08 
event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.089 POWER: Cannot set governor of lcore 0 to performance 00:06:56.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.089 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:56.089 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:56.089 POWER: Unable to set Power Management Environment for lcore 0 00:06:56.089 [2024-11-05 16:21:08.940529] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:56.089 [2024-11-05 16:21:08.940556] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:56.089 [2024-11-05 16:21:08.940568] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.089 [2024-11-05 16:21:08.940590] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.089 [2024-11-05 16:21:08.940600] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.089 [2024-11-05 16:21:08.940611] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.089 16:21:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd 
framework_start_init 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.089 16:21:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.379 [2024-11-05 16:21:09.282766] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:56.379 16:21:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.379 16:21:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.379 16:21:09 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.379 16:21:09 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.379 16:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.379 ************************************ 00:06:56.379 START TEST scheduler_create_thread 00:06:56.379 ************************************ 00:06:56.379 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 2 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 
16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 3 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 4 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 5 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 6 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 
-- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 7 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 8 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 9 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.380 10 
00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.380 16:21:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.774 16:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.774 16:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:57.774 16:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:57.774 16:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.774 16:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.713 16:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.713 16:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:58.713 16:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.713 16:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.650 16:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.650 16:21:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:59.650 16:21:12 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:59.650 16:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.650 16:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.219 ************************************ 00:07:00.219 END TEST scheduler_create_thread 00:07:00.219 ************************************ 00:07:00.219 16:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.219 00:07:00.219 real 0m3.886s 00:07:00.219 user 0m0.031s 00:07:00.219 sys 0m0.007s 00:07:00.219 16:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.219 16:21:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.219 16:21:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:00.219 16:21:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58436 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58436 ']' 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58436 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58436 00:07:00.219 killing process with pid 58436 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@970 
-- # echo 'killing process with pid 58436' 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58436 00:07:00.219 16:21:13 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58436 00:07:00.489 [2024-11-05 16:21:13.561669] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:01.885 ************************************ 00:07:01.885 END TEST event_scheduler 00:07:01.885 ************************************ 00:07:01.885 00:07:01.885 real 0m6.993s 00:07:01.885 user 0m14.439s 00:07:01.885 sys 0m0.500s 00:07:01.885 16:21:14 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.885 16:21:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.885 16:21:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:01.885 16:21:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:01.885 16:21:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.885 16:21:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.885 16:21:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.885 ************************************ 00:07:01.885 START TEST app_repeat 00:07:01.885 ************************************ 00:07:01.885 16:21:14 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:01.885 16:21:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.885 16:21:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:01.886 16:21:14 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:07:01.886 Process app_repeat pid: 58564 00:07:01.886 spdk_app_start Round 0 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58564 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58564' 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:01.886 16:21:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58564 /var/tmp/spdk-nbd.sock 00:07:01.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58564 ']' 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.886 16:21:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.886 [2024-11-05 16:21:14.894990] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:07:01.886 [2024-11-05 16:21:14.895124] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58564 ] 00:07:02.144 [2024-11-05 16:21:15.073630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.144 [2024-11-05 16:21:15.186826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.144 [2024-11-05 16:21:15.186864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.712 16:21:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.712 16:21:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:02.712 16:21:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.971 Malloc0 00:07:03.230 16:21:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.230 Malloc1 00:07:03.489 16:21:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.489 16:21:16 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.489 /dev/nbd0 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.489 16:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.489 1+0 records in 00:07:03.489 1+0 
records out 00:07:03.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378031 s, 10.8 MB/s 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:03.489 16:21:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.747 16:21:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.747 16:21:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:03.747 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.747 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.747 16:21:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.006 /dev/nbd1 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.006 1+0 records in 00:07:04.006 1+0 records out 00:07:04.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417095 s, 9.8 MB/s 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:04.006 16:21:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.006 16:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.265 { 00:07:04.265 "nbd_device": "/dev/nbd0", 00:07:04.265 "bdev_name": "Malloc0" 00:07:04.265 }, 00:07:04.265 { 00:07:04.265 "nbd_device": "/dev/nbd1", 00:07:04.265 "bdev_name": "Malloc1" 00:07:04.265 } 00:07:04.265 ]' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.265 { 00:07:04.265 "nbd_device": "/dev/nbd0", 00:07:04.265 "bdev_name": "Malloc0" 00:07:04.265 }, 00:07:04.265 { 00:07:04.265 "nbd_device": "/dev/nbd1", 00:07:04.265 "bdev_name": "Malloc1" 00:07:04.265 } 00:07:04.265 ]' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.265 /dev/nbd1' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.265 /dev/nbd1' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.265 256+0 records in 00:07:04.265 256+0 records out 00:07:04.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635704 s, 165 MB/s 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.265 256+0 records in 00:07:04.265 256+0 records out 00:07:04.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243879 s, 43.0 MB/s 00:07:04.265 16:21:17 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.265 256+0 records in 00:07:04.265 256+0 records out 00:07:04.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246781 s, 42.5 MB/s 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.265 16:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.524 16:21:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.783 16:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.042 16:21:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.042 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.042 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.042 16:21:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.042 16:21:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.042 16:21:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.607 16:21:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.544 [2024-11-05 16:21:19.583956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.801 [2024-11-05 16:21:19.703961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.801 [2024-11-05 16:21:19.703962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.058 
[2024-11-05 16:21:19.907871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.058 [2024-11-05 16:21:19.907955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.444 16:21:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.444 spdk_app_start Round 1 00:07:08.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.444 16:21:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:08.444 16:21:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58564 /var/tmp/spdk-nbd.sock 00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58564 ']' 00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.444 16:21:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.718 16:21:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.718 16:21:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:08.718 16:21:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.976 Malloc0 00:07:08.976 16:21:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.234 Malloc1 00:07:09.234 16:21:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.234 16:21:22 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.234 16:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.493 /dev/nbd0 00:07:09.493 16:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.493 16:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.493 1+0 records in 00:07:09.493 1+0 records out 00:07:09.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224348 s, 18.3 MB/s 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.493 
16:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:09.493 16:21:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:09.493 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.493 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.493 16:21:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.751 /dev/nbd1 00:07:09.751 16:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.751 16:21:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.751 16:21:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:09.751 16:21:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:09.751 16:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:09.751 16:21:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.752 1+0 records in 00:07:09.752 1+0 records out 00:07:09.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401585 s, 10.2 MB/s 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:09.752 16:21:22 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:09.752 16:21:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:09.752 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.752 16:21:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.752 16:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.752 16:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.752 16:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.010 { 00:07:10.010 "nbd_device": "/dev/nbd0", 00:07:10.010 "bdev_name": "Malloc0" 00:07:10.010 }, 00:07:10.010 { 00:07:10.010 "nbd_device": "/dev/nbd1", 00:07:10.010 "bdev_name": "Malloc1" 00:07:10.010 } 00:07:10.010 ]' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.010 { 00:07:10.010 "nbd_device": "/dev/nbd0", 00:07:10.010 "bdev_name": "Malloc0" 00:07:10.010 }, 00:07:10.010 { 00:07:10.010 "nbd_device": "/dev/nbd1", 00:07:10.010 "bdev_name": "Malloc1" 00:07:10.010 } 00:07:10.010 ]' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.010 /dev/nbd1' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.010 /dev/nbd1' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.010 
16:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.010 256+0 records in 00:07:10.010 256+0 records out 00:07:10.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134658 s, 77.9 MB/s 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.010 16:21:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.010 256+0 records in 00:07:10.010 256+0 records out 00:07:10.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227238 s, 46.1 MB/s 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.010 256+0 records in 00:07:10.010 256+0 records out 00:07:10.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252782 s, 41.5 MB/s 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.010 16:21:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.011 16:21:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.269 16:21:23 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.269 16:21:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.526 16:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.527 16:21:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.784 16:21:23 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.784 16:21:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.784 16:21:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.349 16:21:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.724 [2024-11-05 16:21:25.375315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.724 [2024-11-05 16:21:25.494017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.724 [2024-11-05 16:21:25.494045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.724 [2024-11-05 16:21:25.693031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.724 [2024-11-05 16:21:25.693242] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:14.629 16:21:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:14.629 16:21:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:14.629 spdk_app_start Round 2 00:07:14.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.629 16:21:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58564 /var/tmp/spdk-nbd.sock 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58564 ']' 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.629 16:21:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:14.629 16:21:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.629 Malloc0 00:07:14.888 16:21:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:15.147 Malloc1 00:07:15.147 16:21:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.147 
16:21:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.147 16:21:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:15.410 /dev/nbd0 00:07:15.410 16:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.410 16:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:15.410 16:21:28 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.410 1+0 records in 00:07:15.410 1+0 records out 00:07:15.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492496 s, 8.3 MB/s 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:15.410 16:21:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:15.410 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.410 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.410 16:21:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:15.670 /dev/nbd1 00:07:15.670 16:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.670 16:21:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:15.670 16:21:28 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:15.670 1+0 records in 00:07:15.670 1+0 records out 00:07:15.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286291 s, 14.3 MB/s 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:15.670 16:21:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:15.671 16:21:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:15.671 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.671 16:21:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:15.671 16:21:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.671 16:21:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.671 16:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.931 { 00:07:15.931 "nbd_device": "/dev/nbd0", 00:07:15.931 "bdev_name": "Malloc0" 00:07:15.931 }, 00:07:15.931 { 00:07:15.931 "nbd_device": "/dev/nbd1", 00:07:15.931 "bdev_name": 
"Malloc1" 00:07:15.931 } 00:07:15.931 ]' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.931 { 00:07:15.931 "nbd_device": "/dev/nbd0", 00:07:15.931 "bdev_name": "Malloc0" 00:07:15.931 }, 00:07:15.931 { 00:07:15.931 "nbd_device": "/dev/nbd1", 00:07:15.931 "bdev_name": "Malloc1" 00:07:15.931 } 00:07:15.931 ]' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.931 /dev/nbd1' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.931 /dev/nbd1' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.931 16:21:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.932 256+0 records in 00:07:15.932 256+0 records out 00:07:15.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131665 s, 79.6 MB/s 
00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.932 256+0 records in 00:07:15.932 256+0 records out 00:07:15.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226856 s, 46.2 MB/s 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.932 256+0 records in 00:07:15.932 256+0 records out 00:07:15.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027452 s, 38.2 MB/s 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.932 16:21:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.192 16:21:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.452 16:21:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.711 16:21:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:16.712 16:21:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:16.712 16:21:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:17.281 16:21:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:18.661 [2024-11-05 16:21:31.311313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.661 [2024-11-05 16:21:31.435627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.661 [2024-11-05 16:21:31.435631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.661 [2024-11-05 16:21:31.646729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.661 [2024-11-05 16:21:31.646856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:20.048 16:21:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58564 /var/tmp/spdk-nbd.sock 00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58564 ']' 00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:20.048 16:21:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:20.306 16:21:33 event.app_repeat -- event/event.sh@39 -- # killprocess 58564 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58564 ']' 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58564 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58564 00:07:20.306 killing process with pid 58564 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58564' 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58564 00:07:20.306 16:21:33 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58564 00:07:21.684 spdk_app_start is called in Round 0. 00:07:21.684 Shutdown signal received, stop current app iteration 00:07:21.684 Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 reinitialization... 00:07:21.684 spdk_app_start is called in Round 1. 00:07:21.684 Shutdown signal received, stop current app iteration 00:07:21.684 Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 reinitialization... 00:07:21.684 spdk_app_start is called in Round 2. 
00:07:21.684 Shutdown signal received, stop current app iteration 00:07:21.684 Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 reinitialization... 00:07:21.684 spdk_app_start is called in Round 3. 00:07:21.684 Shutdown signal received, stop current app iteration 00:07:21.684 16:21:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:21.684 16:21:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:21.684 00:07:21.684 real 0m19.626s 00:07:21.684 user 0m42.055s 00:07:21.684 sys 0m2.886s 00:07:21.684 16:21:34 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.684 ************************************ 00:07:21.684 END TEST app_repeat 00:07:21.684 ************************************ 00:07:21.684 16:21:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.684 16:21:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:21.684 16:21:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:21.684 16:21:34 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.684 16:21:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.684 16:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.684 ************************************ 00:07:21.684 START TEST cpu_locks 00:07:21.684 ************************************ 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:21.684 * Looking for test storage... 
00:07:21.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.684 16:21:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.684 --rc genhtml_branch_coverage=1 00:07:21.684 --rc genhtml_function_coverage=1 00:07:21.684 --rc genhtml_legend=1 00:07:21.684 --rc geninfo_all_blocks=1 00:07:21.684 --rc geninfo_unexecuted_blocks=1 00:07:21.684 00:07:21.684 ' 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.684 --rc genhtml_branch_coverage=1 00:07:21.684 --rc genhtml_function_coverage=1 00:07:21.684 --rc genhtml_legend=1 00:07:21.684 --rc geninfo_all_blocks=1 00:07:21.684 --rc geninfo_unexecuted_blocks=1 
00:07:21.684 00:07:21.684 ' 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.684 --rc genhtml_branch_coverage=1 00:07:21.684 --rc genhtml_function_coverage=1 00:07:21.684 --rc genhtml_legend=1 00:07:21.684 --rc geninfo_all_blocks=1 00:07:21.684 --rc geninfo_unexecuted_blocks=1 00:07:21.684 00:07:21.684 ' 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.684 --rc genhtml_branch_coverage=1 00:07:21.684 --rc genhtml_function_coverage=1 00:07:21.684 --rc genhtml_legend=1 00:07:21.684 --rc geninfo_all_blocks=1 00:07:21.684 --rc geninfo_unexecuted_blocks=1 00:07:21.684 00:07:21.684 ' 00:07:21.684 16:21:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:21.684 16:21:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:21.684 16:21:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:21.684 16:21:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:21.684 16:21:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.685 16:21:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.685 16:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.685 ************************************ 00:07:21.685 START TEST default_locks 00:07:21.685 ************************************ 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59013 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.685 
16:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59013 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59013 ']' 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.685 16:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.944 [2024-11-05 16:21:34.855762] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:07:21.944 [2024-11-05 16:21:34.855987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59013 ] 00:07:22.203 [2024-11-05 16:21:35.040972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.203 [2024-11-05 16:21:35.172575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.144 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.144 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:23.144 16:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59013 00:07:23.144 16:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59013 00:07:23.144 16:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59013 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59013 ']' 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59013 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.405 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59013 00:07:23.672 killing process with pid 59013 00:07:23.672 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.672 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.672 16:21:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59013' 00:07:23.672 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59013 00:07:23.672 16:21:36 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59013 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59013 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59013 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59013 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59013 ']' 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.216 ERROR: process (pid: 59013) is no longer running 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.216 16:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.216 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59013) - No such process 00:07:26.216 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.216 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:26.217 00:07:26.217 real 0m4.260s 00:07:26.217 user 0m4.225s 00:07:26.217 sys 0m0.676s 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.217 16:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.217 ************************************ 00:07:26.217 END TEST default_locks 00:07:26.217 ************************************ 00:07:26.217 16:21:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:26.217 16:21:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.217 16:21:39 event.cpu_locks -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.217 16:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.217 ************************************ 00:07:26.217 START TEST default_locks_via_rpc 00:07:26.217 ************************************ 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59088 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59088 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59088 ']' 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.217 16:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.217 [2024-11-05 16:21:39.169767] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:07:26.217 [2024-11-05 16:21:39.169887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ]
00:07:26.476 [2024-11-05 16:21:39.335958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.476 [2024-11-05 16:21:39.461619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59088
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59088
00:07:27.452 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59088
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59088 ']'
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59088
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:27.711 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59088
00:07:27.970 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:27.970 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:27.970 16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59088'
killing process with pid 59088
16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59088
16:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59088
00:07:30.508
00:07:30.508 real 0m4.259s
00:07:30.508 user 0m4.245s
00:07:30.508 sys 0m0.639s
00:07:30.508 16:21:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:30.508 16:21:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:30.508 ************************************
00:07:30.508 END TEST default_locks_via_rpc
00:07:30.508 ************************************
00:07:30.508 16:21:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:30.508 16:21:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:30.508 16:21:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:30.508 16:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:30.508 ************************************
00:07:30.508 START TEST non_locking_app_on_locked_coremask
************************************
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59170
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59170 /var/tmp/spdk.sock
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59170 ']'
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:30.509 16:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:30.509 [2024-11-05 16:21:43.486784] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:30.509 [2024-11-05 16:21:43.487014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59170 ]
00:07:30.768 [2024-11-05 16:21:43.647225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.768 [2024-11-05 16:21:43.770024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59186
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59186 /var/tmp/spdk2.sock
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59186 ']'
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:31.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:31.705 16:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:31.964 [2024-11-05 16:21:44.773838] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:31.964 [2024-11-05 16:21:44.774100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ]
00:07:31.964 [2024-11-05 16:21:44.961375] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:31.964 [2024-11-05 16:21:44.961451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.223 [2024-11-05 16:21:45.205766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.823 16:21:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:34.823 16:21:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:07:34.823 16:21:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59170
00:07:34.823 16:21:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:34.823 16:21:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59170
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59170
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59170 ']'
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59170
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:35.391 16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59170
killing process with pid 59170
16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59170'
16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59170
16:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59170
00:07:40.660 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59186
00:07:40.660 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59186 ']'
00:07:40.660 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59186
00:07:40.660 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:07:40.661 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:40.661 16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59186
killing process with pid 59186
16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59186'
16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59186
16:21:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59186
00:07:43.196 ************************************
00:07:43.196 END TEST non_locking_app_on_locked_coremask
************************************
00:07:43.196
00:07:43.196 real 0m12.405s
00:07:43.196 user 0m12.661s
00:07:43.196 sys 0m1.465s
00:07:43.196 16:21:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:43.196 16:21:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:43.196 16:21:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:43.196 16:21:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:43.196 16:21:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:43.196 16:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:43.196 ************************************
00:07:43.196 START TEST locking_app_on_unlocked_coremask
************************************
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59346
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59346 /var/tmp/spdk.sock
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59346 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:43.196 16:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:43.196 [2024-11-05 16:21:55.967369] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:43.196 [2024-11-05 16:21:55.967510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ]
00:07:43.196 [2024-11-05 16:21:56.127732] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:43.196 [2024-11-05 16:21:56.127944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.196 [2024-11-05 16:21:56.265659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59362
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59362 /var/tmp/spdk2.sock
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59362 ']'
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:44.132 16:21:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:44.391 [2024-11-05 16:21:57.297660] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:44.391 [2024-11-05 16:21:57.297917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ]
00:07:44.391 [2024-11-05 16:21:57.469318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.650 [2024-11-05 16:21:57.728208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.245 16:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:47.245 16:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:07:47.245 16:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59362
00:07:47.245 16:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59362
00:07:47.245 16:21:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59346
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59346 ']'
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59346
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:47.245 16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59346
killing process with pid 59346
16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59346'
16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59346
16:22:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59346
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59362
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59362 ']'
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59362
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:52.607 16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59362
killing process with pid 59362
16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59362'
16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59362
16:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59362
00:07:55.146 ************************************
00:07:55.146 END TEST locking_app_on_unlocked_coremask
************************************
00:07:55.146
00:07:55.146 real 0m11.981s
00:07:55.146 user 0m12.270s
00:07:55.146 sys 0m1.208s
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:55.146 16:22:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:55.146 16:22:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:55.146 16:22:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:55.146 16:22:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:55.146 ************************************
00:07:55.146 START TEST locking_app_on_locked_coremask
************************************
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59516
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59516 /var/tmp/spdk.sock
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59516 ']'
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:55.146 16:22:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:55.146 [2024-11-05 16:22:08.005650] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:55.146 [2024-11-05 16:22:08.005865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ]
00:07:55.146 [2024-11-05 16:22:08.179846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.403 [2024-11-05 16:22:08.309002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59532
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59532 /var/tmp/spdk2.sock
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59532 /var/tmp/spdk2.sock
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59532 /var/tmp/spdk2.sock
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59532 ']'
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:56.341 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:56.341 [2024-11-05 16:22:09.325385] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:07:56.341 [2024-11-05 16:22:09.325638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59532 ]
00:07:56.600 [2024-11-05 16:22:09.501801] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59516 has claimed it.
00:07:56.600 [2024-11-05 16:22:09.501879] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:57.169 ERROR: process (pid: 59532) is no longer running
00:07:57.169 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59532) - No such process
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59516
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59516
00:07:57.169 16:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59516
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59516 ']'
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59516
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59516
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:57.428 16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59516'
killing process with pid 59516
16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59516
16:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59516
00:08:00.715
00:08:00.715 real 0m5.179s
00:08:00.715 user 0m5.414s
00:08:00.715 sys 0m0.851s
00:08:00.715 16:22:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:00.715 16:22:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:00.715 ************************************
00:08:00.715 END TEST locking_app_on_locked_coremask
************************************
00:08:00.715 16:22:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:00.715 16:22:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:08:00.715 16:22:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:00.715 16:22:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:00.715 ************************************
00:08:00.715 START TEST locking_overlapped_coremask
************************************
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59607
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59607 /var/tmp/spdk.sock
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59607 ']'
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:00.715 16:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:00.715 [2024-11-05 16:22:13.256473] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:08:00.715 [2024-11-05 16:22:13.256806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59607 ]
00:08:00.715 [2024-11-05 16:22:13.423817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:00.715 [2024-11-05 16:22:13.563499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:00.715 [2024-11-05 16:22:13.563660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.715 [2024-11-05 16:22:13.563710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59626
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59626 /var/tmp/spdk2.sock
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59626 /var/tmp/spdk2.sock
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59626 /var/tmp/spdk2.sock
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59626 ']'
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:01.653 16:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.911 [2024-11-05 16:22:14.624076] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:08:01.911 [2024-11-05 16:22:14.624326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ]
00:08:01.964 [2024-11-05 16:22:14.823772] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59607 has claimed it.
00:08:01.964 [2024-11-05 16:22:14.823861] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:02.479 ERROR: process (pid: 59626) is no longer running 00:08:02.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59626) - No such process 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59607 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59607 ']' 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59607 00:08:02.479 16:22:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59607 00:08:02.479 killing process with pid 59607 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59607' 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59607 00:08:02.479 16:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59607 00:08:05.767 00:08:05.767 real 0m5.048s 00:08:05.767 user 0m13.878s 00:08:05.767 sys 0m0.589s 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.767 ************************************ 00:08:05.767 END TEST locking_overlapped_coremask 00:08:05.767 ************************************ 00:08:05.767 16:22:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:05.767 16:22:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.767 16:22:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.767 16:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.767 ************************************ 00:08:05.767 START TEST 
locking_overlapped_coremask_via_rpc 00:08:05.767 ************************************ 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59701 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59701 /var/tmp/spdk.sock 00:08:05.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59701 ']' 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.767 16:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.767 [2024-11-05 16:22:18.364996] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:05.767 [2024-11-05 16:22:18.365232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59701 ] 00:08:05.767 [2024-11-05 16:22:18.548648] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:05.767 [2024-11-05 16:22:18.548719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.767 [2024-11-05 16:22:18.691012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.767 [2024-11-05 16:22:18.691109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.767 [2024-11-05 16:22:18.691159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59719 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59719 /var/tmp/spdk2.sock 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59719 ']' 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.710 16:22:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:06.710 [2024-11-05 16:22:19.785096] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:06.710 [2024-11-05 16:22:19.785228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:08:06.969 [2024-11-05 16:22:19.964657] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:06.969 [2024-11-05 16:22:19.964737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.227 [2024-11-05 16:22:20.224889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.227 [2024-11-05 16:22:20.224980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.227 [2024-11-05 16:22:20.225020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.761 16:22:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.761 [2024-11-05 16:22:22.443853] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59701 has claimed it. 00:08:09.761 request: 00:08:09.761 { 00:08:09.761 "method": "framework_enable_cpumask_locks", 00:08:09.761 "req_id": 1 00:08:09.761 } 00:08:09.761 Got JSON-RPC error response 00:08:09.761 response: 00:08:09.761 { 00:08:09.761 "code": -32603, 00:08:09.761 "message": "Failed to claim CPU core: 2" 00:08:09.761 } 00:08:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59701 /var/tmp/spdk.sock 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59701 ']' 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.761 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59719 /var/tmp/spdk2.sock 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59719 ']' 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.762 16:22:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.020 ************************************ 00:08:10.020 END TEST locking_overlapped_coremask_via_rpc 00:08:10.020 ************************************ 00:08:10.020 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.020 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:10.021 00:08:10.021 real 0m4.750s 00:08:10.021 user 0m1.568s 00:08:10.021 sys 0m0.223s 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.021 16:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.021 16:22:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:10.021 16:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59701 ]] 00:08:10.021 16:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59701 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59701 ']' 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59701 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59701 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59701' 00:08:10.021 killing process with pid 59701 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59701 00:08:10.021 16:22:23 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59701 00:08:13.304 16:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59719 ]] 00:08:13.304 16:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59719 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59719 ']' 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59719 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59719 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59719' 00:08:13.304 killing 
process with pid 59719 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59719 00:08:13.304 16:22:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59719 00:08:15.240 16:22:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:15.240 16:22:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:15.240 16:22:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59701 ]] 00:08:15.240 16:22:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59701 00:08:15.240 16:22:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59701 ']' 00:08:15.240 Process with pid 59701 is not found 00:08:15.241 16:22:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59701 00:08:15.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59701) - No such process 00:08:15.241 16:22:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59701 is not found' 00:08:15.241 16:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59719 ]] 00:08:15.241 16:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59719 00:08:15.241 16:22:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59719 ']' 00:08:15.241 16:22:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59719 00:08:15.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59719) - No such process 00:08:15.241 Process with pid 59719 is not found 00:08:15.241 16:22:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59719 is not found' 00:08:15.241 16:22:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:15.499 00:08:15.499 real 0m53.813s 00:08:15.499 user 1m33.128s 00:08:15.499 sys 0m6.868s 00:08:15.499 16:22:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.499 16:22:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:15.499 
************************************ 00:08:15.499 END TEST cpu_locks 00:08:15.499 ************************************ 00:08:15.499 00:08:15.500 real 1m25.827s 00:08:15.500 user 2m37.030s 00:08:15.500 sys 0m10.905s 00:08:15.500 16:22:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.500 16:22:28 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.500 ************************************ 00:08:15.500 END TEST event 00:08:15.500 ************************************ 00:08:15.500 16:22:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:15.500 16:22:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:15.500 16:22:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.500 16:22:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.500 ************************************ 00:08:15.500 START TEST thread 00:08:15.500 ************************************ 00:08:15.500 16:22:28 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:15.500 * Looking for test storage... 
00:08:15.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:15.500 16:22:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.500 16:22:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.500 16:22:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.759 16:22:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.759 16:22:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.759 16:22:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.759 16:22:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.759 16:22:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.759 16:22:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.759 16:22:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.759 16:22:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.759 16:22:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.759 16:22:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.759 16:22:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.759 16:22:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:15.759 16:22:28 thread -- scripts/common.sh@345 -- # : 1 00:08:15.759 16:22:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.759 16:22:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.759 16:22:28 thread -- scripts/common.sh@365 -- # decimal 1 00:08:15.759 16:22:28 thread -- scripts/common.sh@353 -- # local d=1 00:08:15.759 16:22:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.759 16:22:28 thread -- scripts/common.sh@355 -- # echo 1 00:08:15.759 16:22:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.759 16:22:28 thread -- scripts/common.sh@366 -- # decimal 2 00:08:15.759 16:22:28 thread -- scripts/common.sh@353 -- # local d=2 00:08:15.759 16:22:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.759 16:22:28 thread -- scripts/common.sh@355 -- # echo 2 00:08:15.759 16:22:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.759 16:22:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.759 16:22:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.759 16:22:28 thread -- scripts/common.sh@368 -- # return 0 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.759 --rc genhtml_branch_coverage=1 00:08:15.759 --rc genhtml_function_coverage=1 00:08:15.759 --rc genhtml_legend=1 00:08:15.759 --rc geninfo_all_blocks=1 00:08:15.759 --rc geninfo_unexecuted_blocks=1 00:08:15.759 00:08:15.759 ' 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.759 --rc genhtml_branch_coverage=1 00:08:15.759 --rc genhtml_function_coverage=1 00:08:15.759 --rc genhtml_legend=1 00:08:15.759 --rc geninfo_all_blocks=1 00:08:15.759 --rc geninfo_unexecuted_blocks=1 00:08:15.759 00:08:15.759 ' 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.759 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.759 --rc genhtml_branch_coverage=1 00:08:15.759 --rc genhtml_function_coverage=1 00:08:15.759 --rc genhtml_legend=1 00:08:15.759 --rc geninfo_all_blocks=1 00:08:15.759 --rc geninfo_unexecuted_blocks=1 00:08:15.759 00:08:15.759 ' 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.759 --rc genhtml_branch_coverage=1 00:08:15.759 --rc genhtml_function_coverage=1 00:08:15.759 --rc genhtml_legend=1 00:08:15.759 --rc geninfo_all_blocks=1 00:08:15.759 --rc geninfo_unexecuted_blocks=1 00:08:15.759 00:08:15.759 ' 00:08:15.759 16:22:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.759 16:22:28 thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.759 ************************************ 00:08:15.759 START TEST thread_poller_perf 00:08:15.759 ************************************ 00:08:15.759 16:22:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:15.759 [2024-11-05 16:22:28.692882] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:15.759 [2024-11-05 16:22:28.693156] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:08:16.018 [2024-11-05 16:22:28.877360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.018 [2024-11-05 16:22:28.994210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.018 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:17.390 [2024-11-05T16:22:30.478Z] ====================================== 00:08:17.390 [2024-11-05T16:22:30.478Z] busy:2297964776 (cyc) 00:08:17.390 [2024-11-05T16:22:30.478Z] total_run_count: 339000 00:08:17.390 [2024-11-05T16:22:30.478Z] tsc_hz: 2290000000 (cyc) 00:08:17.390 [2024-11-05T16:22:30.478Z] ====================================== 00:08:17.390 [2024-11-05T16:22:30.478Z] poller_cost: 6778 (cyc), 2959 (nsec) 00:08:17.390 00:08:17.390 real 0m1.611s 00:08:17.390 user 0m1.401s 00:08:17.390 sys 0m0.101s 00:08:17.390 ************************************ 00:08:17.390 END TEST thread_poller_perf 00:08:17.390 ************************************ 00:08:17.390 16:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.390 16:22:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.391 16:22:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.391 16:22:30 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:17.391 16:22:30 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.391 16:22:30 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.391 ************************************ 00:08:17.391 START TEST thread_poller_perf 00:08:17.391 
************************************ 00:08:17.391 16:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.391 [2024-11-05 16:22:30.323082] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:17.391 [2024-11-05 16:22:30.323235] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:08:17.650 [2024-11-05 16:22:30.500792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.650 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:17.650 [2024-11-05 16:22:30.665687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.057 [2024-11-05T16:22:32.145Z] ====================================== 00:08:19.057 [2024-11-05T16:22:32.145Z] busy:2294268384 (cyc) 00:08:19.057 [2024-11-05T16:22:32.145Z] total_run_count: 4005000 00:08:19.057 [2024-11-05T16:22:32.145Z] tsc_hz: 2290000000 (cyc) 00:08:19.057 [2024-11-05T16:22:32.145Z] ====================================== 00:08:19.057 [2024-11-05T16:22:32.145Z] poller_cost: 572 (cyc), 249 (nsec) 00:08:19.057 00:08:19.057 real 0m1.686s 00:08:19.057 user 0m1.461s 00:08:19.057 sys 0m0.114s 00:08:19.057 16:22:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.057 ************************************ 00:08:19.057 END TEST thread_poller_perf 00:08:19.057 ************************************ 00:08:19.057 16:22:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 16:22:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:19.057 ************************************ 00:08:19.057 END TEST thread 00:08:19.057 ************************************ 00:08:19.057 
00:08:19.057 real 0m3.576s 00:08:19.057 user 0m2.982s 00:08:19.057 sys 0m0.386s 00:08:19.057 16:22:32 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.057 16:22:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 16:22:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:19.057 16:22:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.057 16:22:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.057 16:22:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.057 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:19.057 ************************************ 00:08:19.057 START TEST app_cmdline 00:08:19.057 ************************************ 00:08:19.057 16:22:32 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.317 * Looking for test storage... 00:08:19.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.317 16:22:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:19.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.317 --rc genhtml_branch_coverage=1 00:08:19.317 --rc genhtml_function_coverage=1 00:08:19.317 --rc 
genhtml_legend=1 00:08:19.317 --rc geninfo_all_blocks=1 00:08:19.317 --rc geninfo_unexecuted_blocks=1 00:08:19.317 00:08:19.317 ' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:19.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.317 --rc genhtml_branch_coverage=1 00:08:19.317 --rc genhtml_function_coverage=1 00:08:19.317 --rc genhtml_legend=1 00:08:19.317 --rc geninfo_all_blocks=1 00:08:19.317 --rc geninfo_unexecuted_blocks=1 00:08:19.317 00:08:19.317 ' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:19.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.317 --rc genhtml_branch_coverage=1 00:08:19.317 --rc genhtml_function_coverage=1 00:08:19.317 --rc genhtml_legend=1 00:08:19.317 --rc geninfo_all_blocks=1 00:08:19.317 --rc geninfo_unexecuted_blocks=1 00:08:19.317 00:08:19.317 ' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:19.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.317 --rc genhtml_branch_coverage=1 00:08:19.317 --rc genhtml_function_coverage=1 00:08:19.317 --rc genhtml_legend=1 00:08:19.317 --rc geninfo_all_blocks=1 00:08:19.317 --rc geninfo_unexecuted_blocks=1 00:08:19.317 00:08:19.317 ' 00:08:19.317 16:22:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:19.317 16:22:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60048 00:08:19.317 16:22:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:19.317 16:22:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60048 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60048 ']' 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.317 16:22:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.576 [2024-11-05 16:22:32.466567] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:19.577 [2024-11-05 16:22:32.466740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:08:19.577 [2024-11-05 16:22:32.643889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.836 [2024-11-05 16:22:32.809635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.215 16:22:34 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.215 16:22:34 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:21.215 { 00:08:21.215 "version": "SPDK v25.01-pre git sha1 f2120392b", 00:08:21.215 "fields": { 00:08:21.215 "major": 25, 00:08:21.215 "minor": 1, 00:08:21.215 "patch": 0, 00:08:21.215 "suffix": "-pre", 00:08:21.215 "commit": "f2120392b" 00:08:21.215 } 00:08:21.215 } 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:21.215 16:22:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.215 16:22:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:21.215 16:22:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:21.215 16:22:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.475 16:22:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:21.475 16:22:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:21.475 16:22:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.475 16:22:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:21.475 16:22:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.475 16:22:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:21.476 16:22:34 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.735 request: 00:08:21.735 { 00:08:21.735 "method": "env_dpdk_get_mem_stats", 00:08:21.735 "req_id": 1 00:08:21.735 } 00:08:21.735 Got JSON-RPC error response 00:08:21.735 response: 00:08:21.735 { 00:08:21.735 "code": -32601, 00:08:21.735 "message": "Method not found" 00:08:21.735 } 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.735 16:22:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60048 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60048 ']' 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60048 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60048 00:08:21.735 killing process with pid 60048 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60048' 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@971 -- # kill 60048 00:08:21.735 16:22:34 app_cmdline -- common/autotest_common.sh@976 -- # wait 60048 00:08:25.066 00:08:25.066 real 0m5.483s 00:08:25.066 user 0m5.641s 00:08:25.066 sys 0m0.890s 00:08:25.066 16:22:37 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.066 16:22:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.066 ************************************ 00:08:25.066 END TEST app_cmdline 00:08:25.066 ************************************ 00:08:25.066 16:22:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:25.066 16:22:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:25.066 16:22:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.066 16:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:25.066 ************************************ 00:08:25.066 START TEST version 00:08:25.066 ************************************ 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:25.066 * Looking for test storage... 00:08:25.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.066 16:22:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.066 16:22:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.066 16:22:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.066 16:22:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.066 16:22:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.066 16:22:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.066 16:22:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.066 16:22:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.066 16:22:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.066 16:22:37 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:25.066 16:22:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.066 16:22:37 version -- scripts/common.sh@344 -- # case "$op" in 00:08:25.066 16:22:37 version -- scripts/common.sh@345 -- # : 1 00:08:25.066 16:22:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.066 16:22:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.066 16:22:37 version -- scripts/common.sh@365 -- # decimal 1 00:08:25.066 16:22:37 version -- scripts/common.sh@353 -- # local d=1 00:08:25.066 16:22:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.066 16:22:37 version -- scripts/common.sh@355 -- # echo 1 00:08:25.066 16:22:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.066 16:22:37 version -- scripts/common.sh@366 -- # decimal 2 00:08:25.066 16:22:37 version -- scripts/common.sh@353 -- # local d=2 00:08:25.066 16:22:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.066 16:22:37 version -- scripts/common.sh@355 -- # echo 2 00:08:25.066 16:22:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.066 16:22:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.066 16:22:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.066 16:22:37 version -- scripts/common.sh@368 -- # return 0 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.066 --rc genhtml_branch_coverage=1 00:08:25.066 --rc genhtml_function_coverage=1 00:08:25.066 --rc genhtml_legend=1 00:08:25.066 --rc geninfo_all_blocks=1 00:08:25.066 --rc geninfo_unexecuted_blocks=1 00:08:25.066 00:08:25.066 ' 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:08:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.066 --rc genhtml_branch_coverage=1 00:08:25.066 --rc genhtml_function_coverage=1 00:08:25.066 --rc genhtml_legend=1 00:08:25.066 --rc geninfo_all_blocks=1 00:08:25.066 --rc geninfo_unexecuted_blocks=1 00:08:25.066 00:08:25.066 ' 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.066 --rc genhtml_branch_coverage=1 00:08:25.066 --rc genhtml_function_coverage=1 00:08:25.066 --rc genhtml_legend=1 00:08:25.066 --rc geninfo_all_blocks=1 00:08:25.066 --rc geninfo_unexecuted_blocks=1 00:08:25.066 00:08:25.066 ' 00:08:25.066 16:22:37 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.066 --rc genhtml_branch_coverage=1 00:08:25.066 --rc genhtml_function_coverage=1 00:08:25.066 --rc genhtml_legend=1 00:08:25.066 --rc geninfo_all_blocks=1 00:08:25.066 --rc geninfo_unexecuted_blocks=1 00:08:25.066 00:08:25.066 ' 00:08:25.066 16:22:37 version -- app/version.sh@17 -- # get_header_version major 00:08:25.066 16:22:37 version -- app/version.sh@14 -- # cut -f2 00:08:25.066 16:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.066 16:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.066 16:22:37 version -- app/version.sh@17 -- # major=25 00:08:25.066 16:22:37 version -- app/version.sh@18 -- # get_header_version minor 00:08:25.067 16:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # cut -f2 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.067 16:22:37 version -- app/version.sh@18 -- # minor=1 00:08:25.067 16:22:37 
version -- app/version.sh@19 -- # get_header_version patch 00:08:25.067 16:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # cut -f2 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.067 16:22:37 version -- app/version.sh@19 -- # patch=0 00:08:25.067 16:22:37 version -- app/version.sh@20 -- # get_header_version suffix 00:08:25.067 16:22:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # cut -f2 00:08:25.067 16:22:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:25.067 16:22:37 version -- app/version.sh@20 -- # suffix=-pre 00:08:25.067 16:22:37 version -- app/version.sh@22 -- # version=25.1 00:08:25.067 16:22:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:25.067 16:22:37 version -- app/version.sh@28 -- # version=25.1rc0 00:08:25.067 16:22:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:25.067 16:22:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:25.067 16:22:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:25.067 16:22:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:25.067 00:08:25.067 real 0m0.307s 00:08:25.067 user 0m0.181s 00:08:25.067 sys 0m0.182s 00:08:25.067 16:22:37 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.067 16:22:37 version -- common/autotest_common.sh@10 -- # set +x 00:08:25.067 ************************************ 00:08:25.067 END TEST version 00:08:25.067 ************************************ 00:08:25.067 
16:22:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:25.067 16:22:37 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:25.067 16:22:37 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:25.067 16:22:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:25.067 16:22:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.067 16:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:25.067 ************************************ 00:08:25.067 START TEST bdev_raid 00:08:25.067 ************************************ 00:08:25.067 16:22:38 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:25.067 * Looking for test storage... 00:08:25.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:25.067 16:22:38 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.067 16:22:38 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.067 16:22:38 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.333 16:22:38 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.333 16:22:38 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:25.333 16:22:38 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.333 16:22:38 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.333 --rc genhtml_branch_coverage=1 00:08:25.333 --rc genhtml_function_coverage=1 00:08:25.333 --rc genhtml_legend=1 00:08:25.333 --rc geninfo_all_blocks=1 00:08:25.333 --rc geninfo_unexecuted_blocks=1 00:08:25.333 00:08:25.333 ' 00:08:25.333 16:22:38 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.333 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:25.333 --rc genhtml_branch_coverage=1 00:08:25.333 --rc genhtml_function_coverage=1 00:08:25.333 --rc genhtml_legend=1 00:08:25.333 --rc geninfo_all_blocks=1 00:08:25.333 --rc geninfo_unexecuted_blocks=1 00:08:25.333 00:08:25.333 ' 00:08:25.333 16:22:38 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.333 --rc genhtml_branch_coverage=1 00:08:25.334 --rc genhtml_function_coverage=1 00:08:25.334 --rc genhtml_legend=1 00:08:25.334 --rc geninfo_all_blocks=1 00:08:25.334 --rc geninfo_unexecuted_blocks=1 00:08:25.334 00:08:25.334 ' 00:08:25.334 16:22:38 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.334 --rc genhtml_branch_coverage=1 00:08:25.334 --rc genhtml_function_coverage=1 00:08:25.334 --rc genhtml_legend=1 00:08:25.334 --rc geninfo_all_blocks=1 00:08:25.334 --rc geninfo_unexecuted_blocks=1 00:08:25.334 00:08:25.334 ' 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:25.334 16:22:38 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:25.334 16:22:38 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:25.334 16:22:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:25.334 16:22:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.334 16:22:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.334 ************************************ 
00:08:25.334 START TEST raid1_resize_data_offset_test 00:08:25.334 ************************************ 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60247 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.334 Process raid pid: 60247 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60247' 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60247 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60247 ']' 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.334 16:22:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.334 [2024-11-05 16:22:38.362165] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:25.334 [2024-11-05 16:22:38.362372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.593 [2024-11-05 16:22:38.569036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.853 [2024-11-05 16:22:38.689894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.853 [2024-11-05 16:22:38.902071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.853 [2024-11-05 16:22:38.902135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.422 malloc0 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.422 malloc1 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.422 16:22:39 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.422 null0 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.422 [2024-11-05 16:22:39.425663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:26.422 [2024-11-05 16:22:39.427828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:26.422 [2024-11-05 16:22:39.427888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:26.422 [2024-11-05 16:22:39.428096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.422 [2024-11-05 16:22:39.428114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:26.422 [2024-11-05 16:22:39.428449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:26.422 [2024-11-05 16:22:39.428696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.422 [2024-11-05 16:22:39.428713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:26.422 [2024-11-05 16:22:39.428957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.422 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.423 [2024-11-05 16:22:39.489498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:26.423 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.423 16:22:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:26.423 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.423 16:22:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.996 malloc2 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.996 [2024-11-05 16:22:40.061160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:26.996 [2024-11-05 16:22:40.079007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.996 [2024-11-05 16:22:40.081137] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.996 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60247 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60247 ']' 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60247 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60247 00:08:27.256 killing process with pid 60247 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60247' 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60247 00:08:27.256 [2024-11-05 16:22:40.186918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.256 16:22:40 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60247 00:08:27.256 [2024-11-05 16:22:40.188088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:27.256 [2024-11-05 16:22:40.188152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.256 [2024-11-05 16:22:40.188169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:27.256 [2024-11-05 16:22:40.226845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.256 [2024-11-05 16:22:40.227202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.256 [2024-11-05 16:22:40.227221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:29.163 [2024-11-05 16:22:42.198490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.540 ************************************ 00:08:30.540 END TEST raid1_resize_data_offset_test 00:08:30.540 ************************************ 00:08:30.540 16:22:43 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:08:30.540 00:08:30.540 real 0m5.158s 00:08:30.540 user 0m5.096s 00:08:30.540 sys 0m0.582s 00:08:30.540 16:22:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.540 16:22:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.540 16:22:43 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:30.540 16:22:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:30.540 16:22:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.540 16:22:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.540 ************************************ 00:08:30.540 START TEST raid0_resize_superblock_test 00:08:30.540 ************************************ 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:30.540 Process raid pid: 60336 00:08:30.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60336 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60336' 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60336 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60336 ']' 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.540 16:22:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.540 [2024-11-05 16:22:43.536052] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:30.540 [2024-11-05 16:22:43.536244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.798 [2024-11-05 16:22:43.709448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.798 [2024-11-05 16:22:43.861394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.055 [2024-11-05 16:22:44.095294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.055 [2024-11-05 16:22:44.095362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.619 16:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.619 16:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:31.619 16:22:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:31.619 16:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.619 16:22:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 malloc0 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 [2024-11-05 16:22:45.092210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:32.230 [2024-11-05 16:22:45.092308] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.230 [2024-11-05 16:22:45.092344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.230 [2024-11-05 16:22:45.092361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.230 [2024-11-05 16:22:45.095253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.230 [2024-11-05 16:22:45.095381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:32.230 pt0 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 53b8984e-2690-4583-bd54-884da35375f1 00:08:32.230 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 f1c70d79-4f97-4f00-b009-b379e5062565 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 db5f75af-f965-41cf-92d7-16f9d7171b97 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 [2024-11-05 16:22:45.221668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f1c70d79-4f97-4f00-b009-b379e5062565 is claimed 00:08:32.231 [2024-11-05 16:22:45.221961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev db5f75af-f965-41cf-92d7-16f9d7171b97 is claimed 00:08:32.231 [2024-11-05 16:22:45.222207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.231 [2024-11-05 16:22:45.222234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:32.231 [2024-11-05 16:22:45.222708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:32.231 [2024-11-05 16:22:45.223006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.231 [2024-11-05 16:22:45.223024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:32.231 [2024-11-05 16:22:45.223281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:32.231 [2024-11-05 
16:22:45.309798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.231 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 [2024-11-05 16:22:45.349692] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:32.488 [2024-11-05 16:22:45.349733] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f1c70d79-4f97-4f00-b009-b379e5062565' was resized: old size 131072, new size 204800 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 [2024-11-05 16:22:45.361651] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:32.488 [2024-11-05 16:22:45.361692] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'db5f75af-f965-41cf-92d7-16f9d7171b97' was resized: old size 131072, new size 204800 00:08:32.488 
[2024-11-05 16:22:45.361737] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.488 16:22:45 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 [2024-11-05 16:22:45.445542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.488 [2024-11-05 16:22:45.477197] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:32.488 [2024-11-05 16:22:45.477364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:32.488 [2024-11-05 16:22:45.477387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.488 [2024-11-05 16:22:45.477411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:32.488 [2024-11-05 16:22:45.477659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.488 [2024-11-05 16:22:45.477717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.488 
[2024-11-05 16:22:45.477733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:32.488 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.489 [2024-11-05 16:22:45.485063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:32.489 [2024-11-05 16:22:45.485204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.489 [2024-11-05 16:22:45.485268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:32.489 [2024-11-05 16:22:45.485305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.489 [2024-11-05 16:22:45.488172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.489 [2024-11-05 16:22:45.488273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:32.489 pt0 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.489 [2024-11-05 16:22:45.490616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f1c70d79-4f97-4f00-b009-b379e5062565 00:08:32.489 [2024-11-05 16:22:45.490795] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f1c70d79-4f97-4f00-b009-b379e5062565 is claimed 00:08:32.489 [2024-11-05 16:22:45.491016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev db5f75af-f965-41cf-92d7-16f9d7171b97 00:08:32.489 [2024-11-05 16:22:45.491106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev db5f75af-f965-41cf-92d7-16f9d7171b97 is claimed 00:08:32.489 [2024-11-05 16:22:45.491329] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev db5f75af-f965-41cf-92d7-16f9d7171b97 (2) smaller than existing raid bdev Raid (3) 00:08:32.489 [2024-11-05 16:22:45.491413] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f1c70d79-4f97-4f00-b009-b379e5062565: File exists 00:08:32.489 [2024-11-05 16:22:45.491515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:32.489 [2024-11-05 16:22:45.491572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:32.489 [2024-11-05 16:22:45.491913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:32.489 [2024-11-05 16:22:45.492153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:32.489 [2024-11-05 16:22:45.492201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:32.489 [2024-11-05 16:22:45.492465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.489 [2024-11-05 16:22:45.505415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60336 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60336 ']' 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60336 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60336 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60336' 
00:08:32.489 killing process with pid 60336 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60336 00:08:32.489 16:22:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60336 00:08:32.489 [2024-11-05 16:22:45.561096] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.489 [2024-11-05 16:22:45.561278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.489 [2024-11-05 16:22:45.561384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.489 [2024-11-05 16:22:45.561447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:34.388 [2024-11-05 16:22:47.047249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.326 16:22:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:35.326 00:08:35.326 real 0m4.790s 00:08:35.326 user 0m4.988s 00:08:35.326 sys 0m0.558s 00:08:35.326 16:22:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.326 16:22:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.326 ************************************ 00:08:35.327 END TEST raid0_resize_superblock_test 00:08:35.327 ************************************ 00:08:35.327 16:22:48 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:35.327 16:22:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.327 16:22:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.327 16:22:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.327 ************************************ 00:08:35.327 START TEST raid1_resize_superblock_test 00:08:35.327 ************************************ 00:08:35.327 
Process raid pid: 60440 00:08:35.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60440 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60440' 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60440 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60440 ']' 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.327 16:22:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.327 [2024-11-05 16:22:48.394228] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:35.327 [2024-11-05 16:22:48.394385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.587 [2024-11-05 16:22:48.577769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.846 [2024-11-05 16:22:48.702450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.846 [2024-11-05 16:22:48.921218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.846 [2024-11-05 16:22:48.921277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.415 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.415 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:36.415 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:36.415 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.415 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 malloc0 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 [2024-11-05 16:22:49.854662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:36.986 [2024-11-05 16:22:49.854741] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.986 [2024-11-05 16:22:49.854768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:36.986 [2024-11-05 16:22:49.854782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.986 [2024-11-05 16:22:49.857284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.986 [2024-11-05 16:22:49.857331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:36.986 pt0 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 f2a23f68-ab0c-4cb4-a7b9-5fe319decd2b 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 17e21cfb-6821-4657-a4e5-7a6184424a23 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 9d5df518-b2de-4c96-8b67-dd037ae8254e 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 [2024-11-05 16:22:49.990320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 17e21cfb-6821-4657-a4e5-7a6184424a23 is claimed 00:08:36.986 [2024-11-05 16:22:49.990413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9d5df518-b2de-4c96-8b67-dd037ae8254e is claimed 00:08:36.986 [2024-11-05 16:22:49.990569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:36.986 [2024-11-05 16:22:49.990587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:36.986 [2024-11-05 16:22:49.990908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:36.986 [2024-11-05 16:22:49.991122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:36.986 [2024-11-05 16:22:49.991140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:36.986 [2024-11-05 16:22:49.991312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:36.986 16:22:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.248 [2024-11-05 
16:22:50.102467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.248 [2024-11-05 16:22:50.150348] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:37.248 [2024-11-05 16:22:50.150434] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '17e21cfb-6821-4657-a4e5-7a6184424a23' was resized: old size 131072, new size 204800 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.248 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.248 [2024-11-05 16:22:50.162228] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:37.248 [2024-11-05 16:22:50.162257] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9d5df518-b2de-4c96-8b67-dd037ae8254e' was resized: old size 131072, new size 204800 00:08:37.249 
[2024-11-05 16:22:50.162286] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:37.249 16:22:50 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 [2024-11-05 16:22:50.274188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 [2024-11-05 16:22:50.309878] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:37.249 [2024-11-05 16:22:50.309964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:37.249 [2024-11-05 16:22:50.309995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:37.249 [2024-11-05 16:22:50.310162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.249 [2024-11-05 16:22:50.310384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.249 [2024-11-05 16:22:50.310459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.249 
[2024-11-05 16:22:50.310480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.249 [2024-11-05 16:22:50.321739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:37.249 [2024-11-05 16:22:50.321809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.249 [2024-11-05 16:22:50.321833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:37.249 [2024-11-05 16:22:50.321847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.249 [2024-11-05 16:22:50.324322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.249 [2024-11-05 16:22:50.324365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:37.249 [2024-11-05 16:22:50.326314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 17e21cfb-6821-4657-a4e5-7a6184424a23 00:08:37.249 [2024-11-05 16:22:50.326400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 17e21cfb-6821-4657-a4e5-7a6184424a23 is claimed 00:08:37.249 [2024-11-05 16:22:50.326545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9d5df518-b2de-4c96-8b67-dd037ae8254e 00:08:37.249 [2024-11-05 16:22:50.326570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9d5df518-b2de-4c96-8b67-dd037ae8254e is claimed 00:08:37.249 [2024-11-05 16:22:50.326775] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9d5df518-b2de-4c96-8b67-dd037ae8254e (2) smaller than existing raid bdev Raid (3) 00:08:37.249 pt0 00:08:37.249 [2024-11-05 16:22:50.326855] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 17e21cfb-6821-4657-a4e5-7a6184424a23: File exists 00:08:37.249 [2024-11-05 16:22:50.326901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:37.249 [2024-11-05 16:22:50.326914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:37.249 [2024-11-05 16:22:50.327185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:37.249 [2024-11-05 16:22:50.327359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:37.249 [2024-11-05 16:22:50.327368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:37.249 [2024-11-05 16:22:50.327561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.249 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:37.508 [2024-11-05 16:22:50.346668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60440 00:08:37.508 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60440 ']' 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60440 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60440 00:08:37.509 killing process with pid 60440 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60440' 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60440 00:08:37.509 [2024-11-05 16:22:50.420766] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.509 [2024-11-05 16:22:50.420872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.509 16:22:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60440 00:08:37.509 [2024-11-05 16:22:50.420939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.509 [2024-11-05 16:22:50.420950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:39.418 [2024-11-05 16:22:51.992513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.355 16:22:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:40.355 00:08:40.355 real 0m4.862s 00:08:40.355 user 0m5.096s 00:08:40.355 sys 0m0.576s 00:08:40.355 ************************************ 00:08:40.355 END TEST raid1_resize_superblock_test 00:08:40.355 ************************************ 00:08:40.355 16:22:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.355 16:22:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:40.355 16:22:53 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:40.355 
16:22:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:40.355 16:22:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.355 16:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.355 ************************************ 00:08:40.355 START TEST raid_function_test_raid0 00:08:40.355 ************************************ 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60543 00:08:40.355 Process raid pid: 60543 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60543' 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60543 00:08:40.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60543 ']' 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.355 16:22:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:40.355 [2024-11-05 16:22:53.345664] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:40.355 [2024-11-05 16:22:53.345778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.614 [2024-11-05 16:22:53.522257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.614 [2024-11-05 16:22:53.641937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.873 [2024-11-05 16:22:53.879855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.873 [2024-11-05 16:22:53.879910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 Base_1 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.440 
16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 Base_2 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 [2024-11-05 16:22:54.339395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:41.440 [2024-11-05 16:22:54.341674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:41.440 [2024-11-05 16:22:54.341806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:41.440 [2024-11-05 16:22:54.341856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:41.440 [2024-11-05 16:22:54.342192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.440 [2024-11-05 16:22:54.342410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:41.440 [2024-11-05 16:22:54.342455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:41.440 [2024-11-05 16:22:54.342697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:41.440 16:22:54 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:41.440 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:41.699 [2024-11-05 16:22:54.650941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.699 /dev/nbd0 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.699 1+0 records in 00:08:41.699 1+0 records out 00:08:41.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538185 s, 7.6 MB/s 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:41.699 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:41.958 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.958 { 00:08:41.958 "nbd_device": "/dev/nbd0", 00:08:41.958 "bdev_name": "raid" 00:08:41.958 } 00:08:41.958 ]' 00:08:41.958 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.958 { 00:08:41.958 "nbd_device": "/dev/nbd0", 00:08:41.958 "bdev_name": "raid" 00:08:41.958 } 00:08:41.958 ]' 00:08:41.958 16:22:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:41.958 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:42.217 4096+0 records in 00:08:42.217 4096+0 records out 00:08:42.217 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0279263 s, 75.1 MB/s 00:08:42.217 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:42.476 4096+0 records in 00:08:42.476 4096+0 records out 00:08:42.476 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.231797 s, 9.0 MB/s 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:42.476 128+0 records in 00:08:42.476 128+0 records out 00:08:42.476 65536 bytes (66 kB, 64 KiB) copied, 0.00137254 s, 47.7 MB/s 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:42.476 2035+0 records in 00:08:42.476 2035+0 records out 00:08:42.476 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0136054 s, 76.6 MB/s 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:42.476 456+0 records in 00:08:42.476 456+0 records out 00:08:42.476 233472 bytes (233 kB, 228 KiB) copied, 0.00391235 s, 59.7 MB/s 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.476 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:42.736 [2024-11-05 16:22:55.716851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:42.736 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:08:42.995 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.995 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.995 16:22:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60543 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60543 ']' 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60543 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.995 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60543 00:08:43.302 killing process with pid 60543 00:08:43.302 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.302 16:22:56 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.302 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60543' 00:08:43.302 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60543 00:08:43.302 [2024-11-05 16:22:56.095985] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.302 [2024-11-05 16:22:56.096088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.302 [2024-11-05 16:22:56.096137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.302 16:22:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60543 00:08:43.302 [2024-11-05 16:22:56.096151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:43.302 [2024-11-05 16:22:56.314723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.680 16:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:44.680 00:08:44.680 real 0m4.224s 00:08:44.680 user 0m5.040s 00:08:44.680 sys 0m1.023s 00:08:44.680 16:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.680 16:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:44.680 ************************************ 00:08:44.680 END TEST raid_function_test_raid0 00:08:44.680 ************************************ 00:08:44.680 16:22:57 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:44.680 16:22:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:44.680 16:22:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.680 16:22:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.680 
************************************ 00:08:44.680 START TEST raid_function_test_concat 00:08:44.680 ************************************ 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:44.680 Process raid pid: 60673 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60673 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60673' 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60673 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60673 ']' 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.680 16:22:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:44.680 [2024-11-05 16:22:57.626967] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:44.680 [2024-11-05 16:22:57.627104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.940 [2024-11-05 16:22:57.807548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.940 [2024-11-05 16:22:57.928637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.199 [2024-11-05 16:22:58.138187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.199 [2024-11-05 16:22:58.138227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.458 Base_1 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:45.458 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.718 Base_2 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.718 [2024-11-05 16:22:58.591049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:45.718 [2024-11-05 16:22:58.592896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:45.718 [2024-11-05 16:22:58.593041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:45.718 [2024-11-05 16:22:58.593060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:45.718 [2024-11-05 16:22:58.593334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:45.718 [2024-11-05 16:22:58.593493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:45.718 [2024-11-05 16:22:58.593502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:45.718 [2024-11-05 16:22:58.593680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.718 16:22:58 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:45.718 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:45.978 [2024-11-05 16:22:58.842704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.978 /dev/nbd0 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:45.978 1+0 records in 00:08:45.978 1+0 records out 00:08:45.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545047 s, 7.5 MB/s 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.978 
16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:45.978 16:22:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:46.237 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:46.237 { 00:08:46.237 "nbd_device": "/dev/nbd0", 00:08:46.237 "bdev_name": "raid" 00:08:46.237 } 00:08:46.237 ]' 00:08:46.237 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:46.237 { 00:08:46.237 "nbd_device": "/dev/nbd0", 00:08:46.237 "bdev_name": "raid" 00:08:46.237 } 00:08:46.237 ]' 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:46.238 
16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:46.238 4096+0 records in 00:08:46.238 4096+0 records out 00:08:46.238 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0356574 s, 58.8 MB/s 00:08:46.238 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:46.497 4096+0 records in 00:08:46.497 4096+0 
records out 00:08:46.497 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.216714 s, 9.7 MB/s 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:46.497 128+0 records in 00:08:46.497 128+0 records out 00:08:46.497 65536 bytes (66 kB, 64 KiB) copied, 0.00116029 s, 56.5 MB/s 00:08:46.497 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:08:46.498 2035+0 records in 00:08:46.498 2035+0 records out 00:08:46.498 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143905 s, 72.4 MB/s 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:46.498 456+0 records in 00:08:46.498 456+0 records out 00:08:46.498 233472 bytes (233 kB, 228 KiB) copied, 0.00419693 s, 55.6 MB/s 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:46.498 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.770 [2024-11-05 16:22:59.817016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:46.770 16:22:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:46.770 16:22:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:47.028 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.028 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.028 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.028 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:47.028 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:47.029 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60673 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60673 ']' 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60673 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60673 00:08:47.287 killing process with pid 60673 00:08:47.287 16:23:00 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60673' 00:08:47.287 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60673 00:08:47.288 16:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60673 00:08:47.288 [2024-11-05 16:23:00.163899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.288 [2024-11-05 16:23:00.164007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.288 [2024-11-05 16:23:00.164066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.288 [2024-11-05 16:23:00.164078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:47.547 [2024-11-05 16:23:00.392202] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.925 16:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:48.925 00:08:48.925 real 0m4.056s 00:08:48.925 user 0m4.742s 00:08:48.925 sys 0m0.987s 00:08:48.925 ************************************ 00:08:48.925 END TEST raid_function_test_concat 00:08:48.925 ************************************ 00:08:48.925 16:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:48.925 16:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:48.925 16:23:01 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:48.925 16:23:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:48.925 16:23:01 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:08:48.925 16:23:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.925 ************************************ 00:08:48.925 START TEST raid0_resize_test 00:08:48.925 ************************************ 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60802 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60802' 00:08:48.925 Process raid pid: 60802 00:08:48.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60802 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60802 ']' 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.925 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:48.926 16:23:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.926 [2024-11-05 16:23:01.747590] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:48.926 [2024-11-05 16:23:01.747823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.926 [2024-11-05 16:23:01.925501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.184 [2024-11-05 16:23:02.052881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.442 [2024-11-05 16:23:02.293828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.442 [2024-11-05 16:23:02.293957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 Base_1 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 Base_2 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [2024-11-05 16:23:02.638031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:49.702 [2024-11-05 16:23:02.640004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:49.702 [2024-11-05 16:23:02.640057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:49.702 [2024-11-05 16:23:02.640068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:49.702 [2024-11-05 16:23:02.640310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:49.702 [2024-11-05 16:23:02.640438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:49.702 [2024-11-05 16:23:02.640448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:49.702 [2024-11-05 16:23:02.640702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [2024-11-05 16:23:02.649981] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:49.702 [2024-11-05 16:23:02.650049] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:49.702 true 
00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [2024-11-05 16:23:02.666185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [2024-11-05 16:23:02.713938] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:49.702 [2024-11-05 16:23:02.714051] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:49.702 [2024-11-05 16:23:02.714141] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:49.702 true 
00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:49.702 [2024-11-05 16:23:02.726139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60802 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60802 ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60802 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.702 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60802 00:08:49.961 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.961 16:23:02 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.961 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60802' 00:08:49.961 killing process with pid 60802 00:08:49.961 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60802 00:08:49.961 [2024-11-05 16:23:02.816134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.961 16:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60802 00:08:49.961 [2024-11-05 16:23:02.816335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.961 [2024-11-05 16:23:02.816393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.961 [2024-11-05 16:23:02.816403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:49.961 [2024-11-05 16:23:02.836709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.339 16:23:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:51.339 00:08:51.339 real 0m2.389s 00:08:51.339 user 0m2.545s 00:08:51.339 sys 0m0.356s 00:08:51.339 16:23:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.339 16:23:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.339 ************************************ 00:08:51.339 END TEST raid0_resize_test 00:08:51.339 ************************************ 00:08:51.339 16:23:04 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:51.339 16:23:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.339 16:23:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.339 16:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.339 
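The raid0_resize_test sequence above creates two 32 MiB null bdevs (blksize 512), assembles a raid0 bdev, resizes each base with bdev_null_resize, and re-reads num_blocks via bdev_get_bdevs. A minimal sketch of the size arithmetic the test asserts, under the assumption that raid0 capacity is bounded by the smallest base bdev; the helper names are illustrative, not SPDK APIs:

```python
def mb_to_blocks(size_mb, blksize=512):
    # 512-byte blocks, matching the blksize local set in bdev_raid.sh above.
    return size_mb * 1024 * 1024 // blksize

def raid0_block_count(base_sizes_mb, blksize=512):
    # raid0 stripes evenly across members, so total capacity is
    # member_count * (blocks of the smallest member).
    return len(base_sizes_mb) * min(mb_to_blocks(mb, blksize) for mb in base_sizes_mb)

# Two 32 MiB bases -> 131072 blocks, as bdev_get_bdevs reports above.
assert raid0_block_count([32, 32]) == 131072
# Resizing only Base_1 to 64 MiB leaves the raid size unchanged (log shows blkcnt=131072)...
assert raid0_block_count([64, 32]) == 131072
# ...and resizing Base_2 as well doubles it, matching "changed from 131072 to 262144".
assert raid0_block_count([64, 64]) == 262144
```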
************************************ 00:08:51.339 START TEST raid1_resize_test 00:08:51.339 ************************************ 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60863 00:08:51.339 Process raid pid: 60863 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60863' 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60863 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60863 ']' 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.339 16:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.339 [2024-11-05 16:23:04.197239] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:51.339 [2024-11-05 16:23:04.197469] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.339 [2024-11-05 16:23:04.364622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.599 [2024-11-05 16:23:04.501080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.857 [2024-11-05 16:23:04.733144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.857 [2024-11-05 16:23:04.733194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 Base_1 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:52.117 16:23:05 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 Base_2 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [2024-11-05 16:23:05.129650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:52.117 [2024-11-05 16:23:05.131688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:52.117 [2024-11-05 16:23:05.131760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:52.117 [2024-11-05 16:23:05.131773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:52.117 [2024-11-05 16:23:05.132067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:52.117 [2024-11-05 16:23:05.132212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:52.117 [2024-11-05 16:23:05.132223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:52.117 [2024-11-05 16:23:05.132421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:52.117 16:23:05 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [2024-11-05 16:23:05.141658] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.117 [2024-11-05 16:23:05.141703] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:52.117 true 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.117 [2024-11-05 16:23:05.157839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.117 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.376 [2024-11-05 16:23:05.209571] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:52.376 [2024-11-05 16:23:05.209614] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:52.376 [2024-11-05 16:23:05.209652] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:52.376 true 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.376 [2024-11-05 16:23:05.225726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60863 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60863 ']' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60863 00:08:52.376 
16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60863 00:08:52.376 killing process with pid 60863 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60863' 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60863 00:08:52.376 [2024-11-05 16:23:05.308888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.376 [2024-11-05 16:23:05.309005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.376 16:23:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60863 00:08:52.376 [2024-11-05 16:23:05.309646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.376 [2024-11-05 16:23:05.309678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:52.376 [2024-11-05 16:23:05.329060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.753 16:23:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:53.753 00:08:53.753 real 0m2.420s 00:08:53.753 user 0m2.613s 00:08:53.753 sys 0m0.350s 00:08:53.753 16:23:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.753 16:23:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 ************************************ 00:08:53.753 END TEST raid1_resize_test 
00:08:53.753 ************************************ 00:08:53.753 16:23:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:53.753 16:23:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:53.753 16:23:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:53.753 16:23:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:53.753 16:23:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.753 16:23:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 ************************************ 00:08:53.753 START TEST raid_state_function_test 00:08:53.753 ************************************ 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.753 16:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:53.753 Process raid pid: 60926 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60926 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60926' 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60926 00:08:53.753 16:23:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60926 ']' 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.753 16:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.753 [2024-11-05 16:23:06.698921] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:08:53.753 [2024-11-05 16:23:06.699153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.012 [2024-11-05 16:23:06.875881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.012 [2024-11-05 16:23:06.999915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.291 [2024-11-05 16:23:07.208322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.291 [2024-11-05 16:23:07.208462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.550 [2024-11-05 16:23:07.549883] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.550 [2024-11-05 16:23:07.550056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.550 [2024-11-05 16:23:07.550101] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.550 [2024-11-05 16:23:07.550134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.550 
16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.550 "name": "Existed_Raid", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "strip_size_kb": 64, 00:08:54.550 "state": "configuring", 00:08:54.550 "raid_level": "raid0", 00:08:54.550 "superblock": false, 00:08:54.550 "num_base_bdevs": 2, 00:08:54.550 "num_base_bdevs_discovered": 0, 00:08:54.550 "num_base_bdevs_operational": 2, 00:08:54.550 "base_bdevs_list": [ 00:08:54.550 { 00:08:54.550 "name": "BaseBdev1", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "is_configured": false, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 0 00:08:54.550 }, 00:08:54.550 { 00:08:54.550 "name": "BaseBdev2", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "is_configured": false, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 0 00:08:54.550 } 00:08:54.550 ] 00:08:54.550 }' 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.550 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.118 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.118 16:23:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 [2024-11-05 16:23:07.961115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.119 [2024-11-05 16:23:07.961157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 [2024-11-05 16:23:07.973093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.119 [2024-11-05 16:23:07.973144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.119 [2024-11-05 16:23:07.973160] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.119 [2024-11-05 16:23:07.973177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 [2024-11-05 16:23:08.023173] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.119 BaseBdev1 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 [ 00:08:55.119 { 00:08:55.119 "name": "BaseBdev1", 00:08:55.119 "aliases": [ 00:08:55.119 "7ce64347-7442-43ab-910e-9342401e690f" 00:08:55.119 ], 00:08:55.119 "product_name": "Malloc disk", 00:08:55.119 "block_size": 512, 00:08:55.119 "num_blocks": 65536, 00:08:55.119 "uuid": 
"7ce64347-7442-43ab-910e-9342401e690f", 00:08:55.119 "assigned_rate_limits": { 00:08:55.119 "rw_ios_per_sec": 0, 00:08:55.119 "rw_mbytes_per_sec": 0, 00:08:55.119 "r_mbytes_per_sec": 0, 00:08:55.119 "w_mbytes_per_sec": 0 00:08:55.119 }, 00:08:55.119 "claimed": true, 00:08:55.119 "claim_type": "exclusive_write", 00:08:55.119 "zoned": false, 00:08:55.119 "supported_io_types": { 00:08:55.119 "read": true, 00:08:55.119 "write": true, 00:08:55.119 "unmap": true, 00:08:55.119 "flush": true, 00:08:55.119 "reset": true, 00:08:55.119 "nvme_admin": false, 00:08:55.119 "nvme_io": false, 00:08:55.119 "nvme_io_md": false, 00:08:55.119 "write_zeroes": true, 00:08:55.119 "zcopy": true, 00:08:55.119 "get_zone_info": false, 00:08:55.119 "zone_management": false, 00:08:55.119 "zone_append": false, 00:08:55.119 "compare": false, 00:08:55.119 "compare_and_write": false, 00:08:55.119 "abort": true, 00:08:55.119 "seek_hole": false, 00:08:55.119 "seek_data": false, 00:08:55.119 "copy": true, 00:08:55.119 "nvme_iov_md": false 00:08:55.119 }, 00:08:55.119 "memory_domains": [ 00:08:55.119 { 00:08:55.119 "dma_device_id": "system", 00:08:55.119 "dma_device_type": 1 00:08:55.119 }, 00:08:55.119 { 00:08:55.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.119 "dma_device_type": 2 00:08:55.119 } 00:08:55.119 ], 00:08:55.119 "driver_specific": {} 00:08:55.119 } 00:08:55.119 ] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.119 16:23:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.119 "name": "Existed_Raid", 00:08:55.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.119 "strip_size_kb": 64, 00:08:55.119 "state": "configuring", 00:08:55.119 "raid_level": "raid0", 00:08:55.119 "superblock": false, 00:08:55.119 "num_base_bdevs": 2, 00:08:55.119 "num_base_bdevs_discovered": 1, 00:08:55.119 "num_base_bdevs_operational": 2, 00:08:55.119 "base_bdevs_list": [ 00:08:55.119 { 00:08:55.119 "name": "BaseBdev1", 00:08:55.119 "uuid": "7ce64347-7442-43ab-910e-9342401e690f", 00:08:55.119 "is_configured": true, 00:08:55.119 "data_offset": 0, 
00:08:55.119 "data_size": 65536 00:08:55.119 }, 00:08:55.119 { 00:08:55.119 "name": "BaseBdev2", 00:08:55.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.119 "is_configured": false, 00:08:55.119 "data_offset": 0, 00:08:55.119 "data_size": 0 00:08:55.119 } 00:08:55.119 ] 00:08:55.119 }' 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.119 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.686 [2024-11-05 16:23:08.510399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.686 [2024-11-05 16:23:08.510463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.686 [2024-11-05 16:23:08.522456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.686 [2024-11-05 16:23:08.524734] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.686 [2024-11-05 16:23:08.524787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.686 "name": "Existed_Raid", 00:08:55.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.686 "strip_size_kb": 64, 00:08:55.686 "state": "configuring", 00:08:55.686 "raid_level": "raid0", 00:08:55.686 "superblock": false, 00:08:55.686 "num_base_bdevs": 2, 00:08:55.686 "num_base_bdevs_discovered": 1, 00:08:55.686 "num_base_bdevs_operational": 2, 00:08:55.686 "base_bdevs_list": [ 00:08:55.686 { 00:08:55.686 "name": "BaseBdev1", 00:08:55.686 "uuid": "7ce64347-7442-43ab-910e-9342401e690f", 00:08:55.686 "is_configured": true, 00:08:55.686 "data_offset": 0, 00:08:55.686 "data_size": 65536 00:08:55.686 }, 00:08:55.686 { 00:08:55.686 "name": "BaseBdev2", 00:08:55.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.686 "is_configured": false, 00:08:55.686 "data_offset": 0, 00:08:55.686 "data_size": 0 00:08:55.686 } 00:08:55.686 ] 00:08:55.686 }' 00:08:55.686 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.687 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.945 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.945 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.945 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.945 [2024-11-05 16:23:08.999038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.945 [2024-11-05 16:23:08.999099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.945 [2024-11-05 16:23:08.999110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:55.945 [2024-11-05 16:23:08.999384] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:55.945 [2024-11-05 16:23:08.999588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.946 [2024-11-05 16:23:08.999609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:55.946 [2024-11-05 16:23:08.999890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.946 BaseBdev2 00:08:55.946 16:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.946 16:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.946 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.946 16:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.946 [ 00:08:55.946 { 00:08:55.946 "name": "BaseBdev2", 00:08:55.946 "aliases": [ 00:08:55.946 "3fc89320-2299-45f3-827a-6359901ea2cc" 00:08:55.946 ], 00:08:55.946 "product_name": "Malloc disk", 00:08:55.946 "block_size": 512, 00:08:55.946 "num_blocks": 65536, 00:08:55.946 "uuid": "3fc89320-2299-45f3-827a-6359901ea2cc", 00:08:55.946 "assigned_rate_limits": { 00:08:55.946 "rw_ios_per_sec": 0, 00:08:55.946 "rw_mbytes_per_sec": 0, 00:08:55.946 "r_mbytes_per_sec": 0, 00:08:55.946 "w_mbytes_per_sec": 0 00:08:55.946 }, 00:08:55.946 "claimed": true, 00:08:55.946 "claim_type": "exclusive_write", 00:08:55.946 "zoned": false, 00:08:55.946 "supported_io_types": { 00:08:55.946 "read": true, 00:08:55.946 "write": true, 00:08:55.946 "unmap": true, 00:08:55.946 "flush": true, 00:08:55.946 "reset": true, 00:08:55.946 "nvme_admin": false, 00:08:55.946 "nvme_io": false, 00:08:55.946 "nvme_io_md": false, 00:08:55.946 "write_zeroes": true, 00:08:55.946 "zcopy": true, 00:08:55.946 "get_zone_info": false, 00:08:55.946 "zone_management": false, 00:08:55.946 "zone_append": false, 00:08:55.946 "compare": false, 00:08:55.946 "compare_and_write": false, 00:08:55.946 "abort": true, 00:08:55.946 "seek_hole": false, 00:08:55.946 "seek_data": false, 00:08:55.946 "copy": true, 00:08:55.946 "nvme_iov_md": false 00:08:55.946 }, 00:08:55.946 "memory_domains": [ 00:08:56.205 { 00:08:56.205 "dma_device_id": "system", 00:08:56.205 "dma_device_type": 1 00:08:56.205 }, 00:08:56.205 { 00:08:56.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.205 "dma_device_type": 2 00:08:56.205 } 00:08:56.205 ], 00:08:56.205 "driver_specific": {} 00:08:56.205 } 00:08:56.205 ] 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:56.205 16:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.205 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:56.205 "name": "Existed_Raid", 00:08:56.205 "uuid": "ed0bc556-5f68-4610-b238-1a0cf82bc59c", 00:08:56.205 "strip_size_kb": 64, 00:08:56.205 "state": "online", 00:08:56.205 "raid_level": "raid0", 00:08:56.205 "superblock": false, 00:08:56.205 "num_base_bdevs": 2, 00:08:56.205 "num_base_bdevs_discovered": 2, 00:08:56.205 "num_base_bdevs_operational": 2, 00:08:56.205 "base_bdevs_list": [ 00:08:56.205 { 00:08:56.206 "name": "BaseBdev1", 00:08:56.206 "uuid": "7ce64347-7442-43ab-910e-9342401e690f", 00:08:56.206 "is_configured": true, 00:08:56.206 "data_offset": 0, 00:08:56.206 "data_size": 65536 00:08:56.206 }, 00:08:56.206 { 00:08:56.206 "name": "BaseBdev2", 00:08:56.206 "uuid": "3fc89320-2299-45f3-827a-6359901ea2cc", 00:08:56.206 "is_configured": true, 00:08:56.206 "data_offset": 0, 00:08:56.206 "data_size": 65536 00:08:56.206 } 00:08:56.206 ] 00:08:56.206 }' 00:08:56.206 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.206 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 [2024-11-05 16:23:09.486534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.465 "name": "Existed_Raid", 00:08:56.465 "aliases": [ 00:08:56.465 "ed0bc556-5f68-4610-b238-1a0cf82bc59c" 00:08:56.465 ], 00:08:56.465 "product_name": "Raid Volume", 00:08:56.465 "block_size": 512, 00:08:56.465 "num_blocks": 131072, 00:08:56.465 "uuid": "ed0bc556-5f68-4610-b238-1a0cf82bc59c", 00:08:56.465 "assigned_rate_limits": { 00:08:56.465 "rw_ios_per_sec": 0, 00:08:56.465 "rw_mbytes_per_sec": 0, 00:08:56.465 "r_mbytes_per_sec": 0, 00:08:56.465 "w_mbytes_per_sec": 0 00:08:56.465 }, 00:08:56.465 "claimed": false, 00:08:56.465 "zoned": false, 00:08:56.465 "supported_io_types": { 00:08:56.465 "read": true, 00:08:56.465 "write": true, 00:08:56.465 "unmap": true, 00:08:56.465 "flush": true, 00:08:56.465 "reset": true, 00:08:56.465 "nvme_admin": false, 00:08:56.465 "nvme_io": false, 00:08:56.465 "nvme_io_md": false, 00:08:56.465 "write_zeroes": true, 00:08:56.465 "zcopy": false, 00:08:56.465 "get_zone_info": false, 00:08:56.465 "zone_management": false, 00:08:56.465 "zone_append": false, 00:08:56.465 "compare": false, 00:08:56.465 "compare_and_write": false, 00:08:56.465 "abort": false, 00:08:56.465 "seek_hole": false, 00:08:56.465 "seek_data": false, 00:08:56.465 "copy": false, 00:08:56.465 "nvme_iov_md": false 00:08:56.465 }, 00:08:56.465 "memory_domains": [ 00:08:56.465 { 00:08:56.465 "dma_device_id": "system", 00:08:56.465 "dma_device_type": 1 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:56.465 "dma_device_type": 2 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "dma_device_id": "system", 00:08:56.465 "dma_device_type": 1 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.465 "dma_device_type": 2 00:08:56.465 } 00:08:56.465 ], 00:08:56.465 "driver_specific": { 00:08:56.465 "raid": { 00:08:56.465 "uuid": "ed0bc556-5f68-4610-b238-1a0cf82bc59c", 00:08:56.465 "strip_size_kb": 64, 00:08:56.465 "state": "online", 00:08:56.465 "raid_level": "raid0", 00:08:56.465 "superblock": false, 00:08:56.465 "num_base_bdevs": 2, 00:08:56.465 "num_base_bdevs_discovered": 2, 00:08:56.465 "num_base_bdevs_operational": 2, 00:08:56.465 "base_bdevs_list": [ 00:08:56.465 { 00:08:56.465 "name": "BaseBdev1", 00:08:56.465 "uuid": "7ce64347-7442-43ab-910e-9342401e690f", 00:08:56.465 "is_configured": true, 00:08:56.465 "data_offset": 0, 00:08:56.465 "data_size": 65536 00:08:56.465 }, 00:08:56.465 { 00:08:56.465 "name": "BaseBdev2", 00:08:56.465 "uuid": "3fc89320-2299-45f3-827a-6359901ea2cc", 00:08:56.465 "is_configured": true, 00:08:56.465 "data_offset": 0, 00:08:56.465 "data_size": 65536 00:08:56.465 } 00:08:56.465 ] 00:08:56.465 } 00:08:56.465 } 00:08:56.465 }' 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.465 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.465 BaseBdev2' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.723 16:23:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:56.723 [2024-11-05 16:23:09.713982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.723 [2024-11-05 16:23:09.714025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.723 [2024-11-05 16:23:09.714083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.980 16:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.980 "name": "Existed_Raid", 00:08:56.980 "uuid": "ed0bc556-5f68-4610-b238-1a0cf82bc59c", 00:08:56.980 "strip_size_kb": 64, 00:08:56.980 "state": "offline", 00:08:56.980 "raid_level": "raid0", 00:08:56.980 "superblock": false, 00:08:56.980 "num_base_bdevs": 2, 00:08:56.980 "num_base_bdevs_discovered": 1, 00:08:56.980 "num_base_bdevs_operational": 1, 00:08:56.980 "base_bdevs_list": [ 00:08:56.980 { 00:08:56.980 "name": null, 00:08:56.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.980 "is_configured": false, 00:08:56.980 "data_offset": 0, 00:08:56.980 "data_size": 65536 00:08:56.980 }, 00:08:56.980 { 00:08:56.980 "name": "BaseBdev2", 00:08:56.980 "uuid": "3fc89320-2299-45f3-827a-6359901ea2cc", 00:08:56.980 "is_configured": true, 00:08:56.980 "data_offset": 0, 00:08:56.980 "data_size": 65536 00:08:56.980 } 00:08:56.980 ] 00:08:56.980 }' 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.980 16:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.238 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 [2024-11-05 16:23:10.283829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.238 [2024-11-05 16:23:10.283894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.498 16:23:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.498 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60926 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60926 ']' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60926 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60926 00:08:57.499 killing process with pid 60926 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60926' 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60926 00:08:57.499 [2024-11-05 16:23:10.471554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:57.499 16:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60926 00:08:57.499 [2024-11-05 16:23:10.489906] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:58.882 00:08:58.882 real 0m5.012s 00:08:58.882 user 0m7.227s 00:08:58.882 sys 0m0.775s 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.882 ************************************ 00:08:58.882 END TEST raid_state_function_test 00:08:58.882 ************************************ 00:08:58.882 16:23:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:58.882 16:23:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:58.882 16:23:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.882 16:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.882 ************************************ 00:08:58.882 START TEST raid_state_function_test_sb 00:08:58.882 ************************************ 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61179 00:08:58.882 Process raid pid: 61179 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61179' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61179 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61179 ']' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:58.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:58.882 16:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.882 [2024-11-05 16:23:11.778223] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:08:58.882 [2024-11-05 16:23:11.778747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.882 [2024-11-05 16:23:11.960130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.142 [2024-11-05 16:23:12.083879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.402 [2024-11-05 16:23:12.301618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.402 [2024-11-05 16:23:12.301666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.663 [2024-11-05 16:23:12.623222] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.663 [2024-11-05 16:23:12.623280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.663 [2024-11-05 16:23:12.623291] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.663 [2024-11-05 16:23:12.623301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.663 
16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.663 "name": "Existed_Raid", 00:08:59.663 "uuid": "aca3bdcb-8e69-410d-b956-1eee4fc60f08", 00:08:59.663 "strip_size_kb": 
64, 00:08:59.663 "state": "configuring", 00:08:59.663 "raid_level": "raid0", 00:08:59.663 "superblock": true, 00:08:59.663 "num_base_bdevs": 2, 00:08:59.663 "num_base_bdevs_discovered": 0, 00:08:59.663 "num_base_bdevs_operational": 2, 00:08:59.663 "base_bdevs_list": [ 00:08:59.663 { 00:08:59.663 "name": "BaseBdev1", 00:08:59.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.663 "is_configured": false, 00:08:59.663 "data_offset": 0, 00:08:59.663 "data_size": 0 00:08:59.663 }, 00:08:59.663 { 00:08:59.663 "name": "BaseBdev2", 00:08:59.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.663 "is_configured": false, 00:08:59.663 "data_offset": 0, 00:08:59.663 "data_size": 0 00:08:59.663 } 00:08:59.663 ] 00:08:59.663 }' 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.663 16:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 [2024-11-05 16:23:13.066394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.232 [2024-11-05 16:23:13.066435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 [2024-11-05 16:23:13.078354] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.232 [2024-11-05 16:23:13.078393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.232 [2024-11-05 16:23:13.078402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.232 [2024-11-05 16:23:13.078413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 [2024-11-05 16:23:13.126474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.232 BaseBdev1 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 [ 00:09:00.232 { 00:09:00.232 "name": "BaseBdev1", 00:09:00.232 "aliases": [ 00:09:00.232 "482f3363-8aa1-4414-b75d-4c3c104d0628" 00:09:00.232 ], 00:09:00.232 "product_name": "Malloc disk", 00:09:00.232 "block_size": 512, 00:09:00.232 "num_blocks": 65536, 00:09:00.232 "uuid": "482f3363-8aa1-4414-b75d-4c3c104d0628", 00:09:00.232 "assigned_rate_limits": { 00:09:00.232 "rw_ios_per_sec": 0, 00:09:00.232 "rw_mbytes_per_sec": 0, 00:09:00.232 "r_mbytes_per_sec": 0, 00:09:00.232 "w_mbytes_per_sec": 0 00:09:00.232 }, 00:09:00.232 "claimed": true, 00:09:00.232 "claim_type": "exclusive_write", 00:09:00.232 "zoned": false, 00:09:00.232 "supported_io_types": { 00:09:00.232 "read": true, 00:09:00.232 "write": true, 00:09:00.232 "unmap": true, 00:09:00.232 "flush": true, 00:09:00.232 "reset": true, 00:09:00.232 "nvme_admin": false, 00:09:00.232 "nvme_io": false, 00:09:00.232 "nvme_io_md": false, 00:09:00.232 "write_zeroes": true, 00:09:00.232 "zcopy": true, 00:09:00.232 "get_zone_info": false, 00:09:00.232 "zone_management": false, 00:09:00.232 "zone_append": false, 00:09:00.232 "compare": false, 00:09:00.232 "compare_and_write": false, 00:09:00.232 
"abort": true, 00:09:00.232 "seek_hole": false, 00:09:00.232 "seek_data": false, 00:09:00.232 "copy": true, 00:09:00.232 "nvme_iov_md": false 00:09:00.232 }, 00:09:00.232 "memory_domains": [ 00:09:00.232 { 00:09:00.232 "dma_device_id": "system", 00:09:00.232 "dma_device_type": 1 00:09:00.232 }, 00:09:00.232 { 00:09:00.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.232 "dma_device_type": 2 00:09:00.232 } 00:09:00.232 ], 00:09:00.232 "driver_specific": {} 00:09:00.232 } 00:09:00.232 ] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.232 "name": "Existed_Raid", 00:09:00.232 "uuid": "c247b865-869e-4186-a9d7-a3c504dc918a", 00:09:00.232 "strip_size_kb": 64, 00:09:00.232 "state": "configuring", 00:09:00.232 "raid_level": "raid0", 00:09:00.232 "superblock": true, 00:09:00.232 "num_base_bdevs": 2, 00:09:00.232 "num_base_bdevs_discovered": 1, 00:09:00.232 "num_base_bdevs_operational": 2, 00:09:00.232 "base_bdevs_list": [ 00:09:00.232 { 00:09:00.232 "name": "BaseBdev1", 00:09:00.232 "uuid": "482f3363-8aa1-4414-b75d-4c3c104d0628", 00:09:00.232 "is_configured": true, 00:09:00.232 "data_offset": 2048, 00:09:00.232 "data_size": 63488 00:09:00.232 }, 00:09:00.232 { 00:09:00.232 "name": "BaseBdev2", 00:09:00.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.232 "is_configured": false, 00:09:00.232 "data_offset": 0, 00:09:00.232 "data_size": 0 00:09:00.232 } 00:09:00.232 ] 00:09:00.232 }' 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.232 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.800 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.800 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.800 16:23:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.800 [2024-11-05 16:23:13.621729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.800 [2024-11-05 16:23:13.621790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:00.800 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.800 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.801 [2024-11-05 16:23:13.633761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.801 [2024-11-05 16:23:13.635844] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.801 [2024-11-05 16:23:13.635885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.801 "name": "Existed_Raid", 00:09:00.801 "uuid": "fd29d495-4ce2-4c05-9110-05ba6f559b01", 00:09:00.801 "strip_size_kb": 64, 00:09:00.801 "state": "configuring", 00:09:00.801 "raid_level": "raid0", 00:09:00.801 "superblock": true, 00:09:00.801 "num_base_bdevs": 2, 00:09:00.801 "num_base_bdevs_discovered": 1, 00:09:00.801 "num_base_bdevs_operational": 2, 00:09:00.801 "base_bdevs_list": [ 00:09:00.801 { 00:09:00.801 "name": "BaseBdev1", 00:09:00.801 "uuid": "482f3363-8aa1-4414-b75d-4c3c104d0628", 00:09:00.801 "is_configured": true, 00:09:00.801 "data_offset": 2048, 
00:09:00.801 "data_size": 63488 00:09:00.801 }, 00:09:00.801 { 00:09:00.801 "name": "BaseBdev2", 00:09:00.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.801 "is_configured": false, 00:09:00.801 "data_offset": 0, 00:09:00.801 "data_size": 0 00:09:00.801 } 00:09:00.801 ] 00:09:00.801 }' 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.801 16:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.087 [2024-11-05 16:23:14.076117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.087 [2024-11-05 16:23:14.076424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.087 [2024-11-05 16:23:14.076447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:01.087 [2024-11-05 16:23:14.076814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:01.087 BaseBdev2 00:09:01.087 [2024-11-05 16:23:14.077002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.087 [2024-11-05 16:23:14.077023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:01.087 [2024-11-05 16:23:14.077209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.087 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.087 [ 00:09:01.087 { 00:09:01.087 "name": "BaseBdev2", 00:09:01.087 "aliases": [ 00:09:01.087 "f539093e-ee61-4040-8985-9cebe94fb6d6" 00:09:01.087 ], 00:09:01.087 "product_name": "Malloc disk", 00:09:01.087 "block_size": 512, 00:09:01.087 "num_blocks": 65536, 00:09:01.087 "uuid": "f539093e-ee61-4040-8985-9cebe94fb6d6", 00:09:01.087 "assigned_rate_limits": { 00:09:01.087 "rw_ios_per_sec": 0, 00:09:01.087 "rw_mbytes_per_sec": 0, 00:09:01.087 "r_mbytes_per_sec": 0, 00:09:01.087 "w_mbytes_per_sec": 0 00:09:01.087 }, 00:09:01.087 "claimed": true, 00:09:01.087 "claim_type": 
"exclusive_write", 00:09:01.088 "zoned": false, 00:09:01.088 "supported_io_types": { 00:09:01.088 "read": true, 00:09:01.088 "write": true, 00:09:01.088 "unmap": true, 00:09:01.088 "flush": true, 00:09:01.088 "reset": true, 00:09:01.088 "nvme_admin": false, 00:09:01.088 "nvme_io": false, 00:09:01.088 "nvme_io_md": false, 00:09:01.088 "write_zeroes": true, 00:09:01.088 "zcopy": true, 00:09:01.088 "get_zone_info": false, 00:09:01.088 "zone_management": false, 00:09:01.088 "zone_append": false, 00:09:01.088 "compare": false, 00:09:01.088 "compare_and_write": false, 00:09:01.088 "abort": true, 00:09:01.088 "seek_hole": false, 00:09:01.088 "seek_data": false, 00:09:01.088 "copy": true, 00:09:01.088 "nvme_iov_md": false 00:09:01.088 }, 00:09:01.088 "memory_domains": [ 00:09:01.088 { 00:09:01.088 "dma_device_id": "system", 00:09:01.088 "dma_device_type": 1 00:09:01.088 }, 00:09:01.088 { 00:09:01.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.088 "dma_device_type": 2 00:09:01.088 } 00:09:01.088 ], 00:09:01.088 "driver_specific": {} 00:09:01.088 } 00:09:01.088 ] 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.088 "name": "Existed_Raid", 00:09:01.088 "uuid": "fd29d495-4ce2-4c05-9110-05ba6f559b01", 00:09:01.088 "strip_size_kb": 64, 00:09:01.088 "state": "online", 00:09:01.088 "raid_level": "raid0", 00:09:01.088 "superblock": true, 00:09:01.088 "num_base_bdevs": 2, 00:09:01.088 "num_base_bdevs_discovered": 2, 00:09:01.088 "num_base_bdevs_operational": 2, 00:09:01.088 "base_bdevs_list": [ 00:09:01.088 { 00:09:01.088 "name": "BaseBdev1", 00:09:01.088 "uuid": "482f3363-8aa1-4414-b75d-4c3c104d0628", 00:09:01.088 "is_configured": true, 00:09:01.088 "data_offset": 2048, 00:09:01.088 "data_size": 63488 
00:09:01.088 }, 00:09:01.088 { 00:09:01.088 "name": "BaseBdev2", 00:09:01.088 "uuid": "f539093e-ee61-4040-8985-9cebe94fb6d6", 00:09:01.088 "is_configured": true, 00:09:01.088 "data_offset": 2048, 00:09:01.088 "data_size": 63488 00:09:01.088 } 00:09:01.088 ] 00:09:01.088 }' 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.088 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.764 [2024-11-05 16:23:14.547679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.764 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.764 "name": 
"Existed_Raid", 00:09:01.764 "aliases": [ 00:09:01.764 "fd29d495-4ce2-4c05-9110-05ba6f559b01" 00:09:01.764 ], 00:09:01.764 "product_name": "Raid Volume", 00:09:01.764 "block_size": 512, 00:09:01.764 "num_blocks": 126976, 00:09:01.764 "uuid": "fd29d495-4ce2-4c05-9110-05ba6f559b01", 00:09:01.764 "assigned_rate_limits": { 00:09:01.764 "rw_ios_per_sec": 0, 00:09:01.764 "rw_mbytes_per_sec": 0, 00:09:01.764 "r_mbytes_per_sec": 0, 00:09:01.764 "w_mbytes_per_sec": 0 00:09:01.764 }, 00:09:01.764 "claimed": false, 00:09:01.764 "zoned": false, 00:09:01.765 "supported_io_types": { 00:09:01.765 "read": true, 00:09:01.765 "write": true, 00:09:01.765 "unmap": true, 00:09:01.765 "flush": true, 00:09:01.765 "reset": true, 00:09:01.765 "nvme_admin": false, 00:09:01.765 "nvme_io": false, 00:09:01.765 "nvme_io_md": false, 00:09:01.765 "write_zeroes": true, 00:09:01.765 "zcopy": false, 00:09:01.765 "get_zone_info": false, 00:09:01.765 "zone_management": false, 00:09:01.765 "zone_append": false, 00:09:01.765 "compare": false, 00:09:01.765 "compare_and_write": false, 00:09:01.765 "abort": false, 00:09:01.765 "seek_hole": false, 00:09:01.765 "seek_data": false, 00:09:01.765 "copy": false, 00:09:01.765 "nvme_iov_md": false 00:09:01.765 }, 00:09:01.765 "memory_domains": [ 00:09:01.765 { 00:09:01.765 "dma_device_id": "system", 00:09:01.765 "dma_device_type": 1 00:09:01.765 }, 00:09:01.765 { 00:09:01.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.765 "dma_device_type": 2 00:09:01.765 }, 00:09:01.765 { 00:09:01.765 "dma_device_id": "system", 00:09:01.765 "dma_device_type": 1 00:09:01.765 }, 00:09:01.765 { 00:09:01.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.765 "dma_device_type": 2 00:09:01.765 } 00:09:01.765 ], 00:09:01.765 "driver_specific": { 00:09:01.765 "raid": { 00:09:01.765 "uuid": "fd29d495-4ce2-4c05-9110-05ba6f559b01", 00:09:01.765 "strip_size_kb": 64, 00:09:01.765 "state": "online", 00:09:01.765 "raid_level": "raid0", 00:09:01.765 "superblock": true, 00:09:01.765 
"num_base_bdevs": 2, 00:09:01.765 "num_base_bdevs_discovered": 2, 00:09:01.765 "num_base_bdevs_operational": 2, 00:09:01.765 "base_bdevs_list": [ 00:09:01.765 { 00:09:01.765 "name": "BaseBdev1", 00:09:01.765 "uuid": "482f3363-8aa1-4414-b75d-4c3c104d0628", 00:09:01.765 "is_configured": true, 00:09:01.765 "data_offset": 2048, 00:09:01.765 "data_size": 63488 00:09:01.765 }, 00:09:01.765 { 00:09:01.765 "name": "BaseBdev2", 00:09:01.765 "uuid": "f539093e-ee61-4040-8985-9cebe94fb6d6", 00:09:01.765 "is_configured": true, 00:09:01.765 "data_offset": 2048, 00:09:01.765 "data_size": 63488 00:09:01.765 } 00:09:01.765 ] 00:09:01.765 } 00:09:01.765 } 00:09:01.765 }' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:01.765 BaseBdev2' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.765 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.765 [2024-11-05 16:23:14.803073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.765 [2024-11-05 16:23:14.803120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.765 [2024-11-05 16:23:14.803195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.025 16:23:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.025 "name": "Existed_Raid", 00:09:02.025 "uuid": "fd29d495-4ce2-4c05-9110-05ba6f559b01", 00:09:02.025 "strip_size_kb": 64, 00:09:02.025 "state": "offline", 00:09:02.025 "raid_level": "raid0", 00:09:02.025 "superblock": true, 00:09:02.025 "num_base_bdevs": 2, 00:09:02.025 "num_base_bdevs_discovered": 1, 00:09:02.025 "num_base_bdevs_operational": 1, 00:09:02.025 "base_bdevs_list": [ 00:09:02.025 { 00:09:02.025 "name": null, 00:09:02.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.025 "is_configured": false, 00:09:02.025 "data_offset": 0, 00:09:02.025 "data_size": 63488 00:09:02.025 }, 00:09:02.025 { 00:09:02.025 "name": "BaseBdev2", 00:09:02.025 "uuid": "f539093e-ee61-4040-8985-9cebe94fb6d6", 00:09:02.025 "is_configured": true, 00:09:02.025 "data_offset": 2048, 00:09:02.025 "data_size": 63488 00:09:02.025 } 00:09:02.025 ] 00:09:02.025 }' 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.025 16:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.284 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:02.284 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.284 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.284 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.284 16:23:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.284 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 [2024-11-05 16:23:15.418313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.545 [2024-11-05 16:23:15.418376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.545 16:23:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61179 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61179 ']' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61179 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61179 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:02.545 killing process with pid 61179 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61179' 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61179 00:09:02.545 [2024-11-05 16:23:15.599160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.545 16:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61179 00:09:02.545 [2024-11-05 16:23:15.617287] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.926 16:23:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.926 00:09:03.926 real 0m5.065s 00:09:03.926 user 0m7.372s 00:09:03.926 sys 0m0.785s 00:09:03.926 16:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.926 16:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 ************************************ 00:09:03.926 END TEST raid_state_function_test_sb 00:09:03.926 ************************************ 00:09:03.926 16:23:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:03.926 16:23:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:03.926 16:23:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.926 16:23:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 ************************************ 00:09:03.926 START TEST raid_superblock_test 00:09:03.926 ************************************ 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:03.926 16:23:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61426 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61426 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61426 ']' 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.926 16:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 [2024-11-05 16:23:16.907107] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:03.926 [2024-11-05 16:23:16.907222] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61426 ] 00:09:04.186 [2024-11-05 16:23:17.065479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.186 [2024-11-05 16:23:17.183086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.446 [2024-11-05 16:23:17.391598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.446 [2024-11-05 16:23:17.391675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.706 16:23:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.706 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 malloc1 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 [2024-11-05 16:23:17.807794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.965 [2024-11-05 16:23:17.807864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.965 [2024-11-05 16:23:17.807890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:04.965 [2024-11-05 16:23:17.807901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.965 [2024-11-05 16:23:17.810372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.965 [2024-11-05 16:23:17.810411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.965 pt1 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.965 16:23:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 malloc2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 [2024-11-05 16:23:17.866076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.965 [2024-11-05 16:23:17.866132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.965 [2024-11-05 16:23:17.866157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:04.965 
[2024-11-05 16:23:17.866166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.965 [2024-11-05 16:23:17.868428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.965 [2024-11-05 16:23:17.868462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.965 pt2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 [2024-11-05 16:23:17.878102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.965 [2024-11-05 16:23:17.879905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.965 [2024-11-05 16:23:17.880060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.965 [2024-11-05 16:23:17.880073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:04.965 [2024-11-05 16:23:17.880330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:04.965 [2024-11-05 16:23:17.880516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.965 [2024-11-05 16:23:17.880554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:04.965 [2024-11-05 16:23:17.880723] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.965 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.965 "name": "raid_bdev1", 00:09:04.966 "uuid": 
"880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:04.966 "strip_size_kb": 64, 00:09:04.966 "state": "online", 00:09:04.966 "raid_level": "raid0", 00:09:04.966 "superblock": true, 00:09:04.966 "num_base_bdevs": 2, 00:09:04.966 "num_base_bdevs_discovered": 2, 00:09:04.966 "num_base_bdevs_operational": 2, 00:09:04.966 "base_bdevs_list": [ 00:09:04.966 { 00:09:04.966 "name": "pt1", 00:09:04.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.966 "is_configured": true, 00:09:04.966 "data_offset": 2048, 00:09:04.966 "data_size": 63488 00:09:04.966 }, 00:09:04.966 { 00:09:04.966 "name": "pt2", 00:09:04.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.966 "is_configured": true, 00:09:04.966 "data_offset": 2048, 00:09:04.966 "data_size": 63488 00:09:04.966 } 00:09:04.966 ] 00:09:04.966 }' 00:09:04.966 16:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.966 16:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 
16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.534 [2024-11-05 16:23:18.329683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.534 "name": "raid_bdev1", 00:09:05.534 "aliases": [ 00:09:05.534 "880242cc-98c6-4a37-ab92-56015ab2dce3" 00:09:05.534 ], 00:09:05.534 "product_name": "Raid Volume", 00:09:05.534 "block_size": 512, 00:09:05.534 "num_blocks": 126976, 00:09:05.534 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:05.534 "assigned_rate_limits": { 00:09:05.534 "rw_ios_per_sec": 0, 00:09:05.534 "rw_mbytes_per_sec": 0, 00:09:05.534 "r_mbytes_per_sec": 0, 00:09:05.534 "w_mbytes_per_sec": 0 00:09:05.534 }, 00:09:05.534 "claimed": false, 00:09:05.534 "zoned": false, 00:09:05.534 "supported_io_types": { 00:09:05.534 "read": true, 00:09:05.534 "write": true, 00:09:05.534 "unmap": true, 00:09:05.534 "flush": true, 00:09:05.534 "reset": true, 00:09:05.534 "nvme_admin": false, 00:09:05.534 "nvme_io": false, 00:09:05.534 "nvme_io_md": false, 00:09:05.534 "write_zeroes": true, 00:09:05.534 "zcopy": false, 00:09:05.534 "get_zone_info": false, 00:09:05.534 "zone_management": false, 00:09:05.534 "zone_append": false, 00:09:05.534 "compare": false, 00:09:05.534 "compare_and_write": false, 00:09:05.534 "abort": false, 00:09:05.534 "seek_hole": false, 00:09:05.534 "seek_data": false, 00:09:05.534 "copy": false, 00:09:05.534 "nvme_iov_md": false 00:09:05.534 }, 00:09:05.534 "memory_domains": [ 00:09:05.534 { 00:09:05.534 "dma_device_id": "system", 00:09:05.534 "dma_device_type": 1 00:09:05.534 }, 00:09:05.534 { 00:09:05.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.534 "dma_device_type": 2 00:09:05.534 }, 00:09:05.534 { 00:09:05.534 "dma_device_id": "system", 00:09:05.534 
"dma_device_type": 1 00:09:05.534 }, 00:09:05.534 { 00:09:05.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.534 "dma_device_type": 2 00:09:05.534 } 00:09:05.534 ], 00:09:05.534 "driver_specific": { 00:09:05.534 "raid": { 00:09:05.534 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:05.534 "strip_size_kb": 64, 00:09:05.534 "state": "online", 00:09:05.534 "raid_level": "raid0", 00:09:05.534 "superblock": true, 00:09:05.534 "num_base_bdevs": 2, 00:09:05.534 "num_base_bdevs_discovered": 2, 00:09:05.534 "num_base_bdevs_operational": 2, 00:09:05.534 "base_bdevs_list": [ 00:09:05.534 { 00:09:05.534 "name": "pt1", 00:09:05.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.534 "is_configured": true, 00:09:05.534 "data_offset": 2048, 00:09:05.534 "data_size": 63488 00:09:05.534 }, 00:09:05.534 { 00:09:05.534 "name": "pt2", 00:09:05.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.534 "is_configured": true, 00:09:05.534 "data_offset": 2048, 00:09:05.534 "data_size": 63488 00:09:05.534 } 00:09:05.534 ] 00:09:05.534 } 00:09:05.534 } 00:09:05.534 }' 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:05.534 pt2' 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:05.534 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.534 16:23:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.535 [2024-11-05 16:23:18.545273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=880242cc-98c6-4a37-ab92-56015ab2dce3 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 880242cc-98c6-4a37-ab92-56015ab2dce3 ']' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.535 [2024-11-05 16:23:18.584856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.535 [2024-11-05 16:23:18.584886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.535 [2024-11-05 16:23:18.584978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.535 [2024-11-05 16:23:18.585027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.535 [2024-11-05 16:23:18.585039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.535 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.795 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.796 [2024-11-05 16:23:18.712757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:05.796 [2024-11-05 16:23:18.714883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:05.796 [2024-11-05 16:23:18.714972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:05.796 [2024-11-05 16:23:18.715025] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:05.796 [2024-11-05 16:23:18.715042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.796 [2024-11-05 16:23:18.715055] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:05.796 request: 00:09:05.796 { 00:09:05.796 "name": "raid_bdev1", 00:09:05.796 "raid_level": "raid0", 00:09:05.796 "base_bdevs": [ 00:09:05.796 "malloc1", 00:09:05.796 "malloc2" 00:09:05.796 ], 00:09:05.796 "strip_size_kb": 64, 00:09:05.796 "superblock": false, 00:09:05.796 "method": "bdev_raid_create", 00:09:05.796 "req_id": 1 00:09:05.796 } 00:09:05.796 Got JSON-RPC error response 00:09:05.796 response: 00:09:05.796 { 00:09:05.796 "code": -17, 00:09:05.796 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:05.796 } 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.796 [2024-11-05 16:23:18.772698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.796 [2024-11-05 16:23:18.772757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.796 [2024-11-05 16:23:18.772780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:05.796 [2024-11-05 16:23:18.772792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.796 [2024-11-05 16:23:18.775079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.796 [2024-11-05 16:23:18.775113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.796 [2024-11-05 16:23:18.775219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:05.796 [2024-11-05 16:23:18.775287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:05.796 pt1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.796 "name": "raid_bdev1", 00:09:05.796 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:05.796 "strip_size_kb": 64, 00:09:05.796 "state": "configuring", 00:09:05.796 "raid_level": "raid0", 00:09:05.796 "superblock": true, 00:09:05.796 "num_base_bdevs": 2, 00:09:05.796 "num_base_bdevs_discovered": 1, 00:09:05.796 "num_base_bdevs_operational": 2, 00:09:05.796 "base_bdevs_list": [ 00:09:05.796 { 00:09:05.796 "name": "pt1", 00:09:05.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.796 "is_configured": true, 00:09:05.796 "data_offset": 2048, 00:09:05.796 "data_size": 63488 00:09:05.796 }, 00:09:05.796 { 00:09:05.796 "name": null, 00:09:05.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.796 "is_configured": false, 00:09:05.796 "data_offset": 2048, 00:09:05.796 "data_size": 63488 00:09:05.796 } 00:09:05.796 ] 00:09:05.796 }' 00:09:05.796 16:23:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.796 16:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.367 [2024-11-05 16:23:19.156696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.367 [2024-11-05 16:23:19.156767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.367 [2024-11-05 16:23:19.156788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:06.367 [2024-11-05 16:23:19.156799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.367 [2024-11-05 16:23:19.157305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.367 [2024-11-05 16:23:19.157333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.367 [2024-11-05 16:23:19.157427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:06.367 [2024-11-05 16:23:19.157456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.367 [2024-11-05 16:23:19.157595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.367 [2024-11-05 16:23:19.157608] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:06.367 [2024-11-05 16:23:19.157861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:06.367 [2024-11-05 16:23:19.158026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.367 [2024-11-05 16:23:19.158042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:06.367 [2024-11-05 16:23:19.158186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.367 pt2 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.367 "name": "raid_bdev1", 00:09:06.367 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:06.367 "strip_size_kb": 64, 00:09:06.367 "state": "online", 00:09:06.367 "raid_level": "raid0", 00:09:06.367 "superblock": true, 00:09:06.367 "num_base_bdevs": 2, 00:09:06.367 "num_base_bdevs_discovered": 2, 00:09:06.367 "num_base_bdevs_operational": 2, 00:09:06.367 "base_bdevs_list": [ 00:09:06.367 { 00:09:06.367 "name": "pt1", 00:09:06.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.367 "is_configured": true, 00:09:06.367 "data_offset": 2048, 00:09:06.367 "data_size": 63488 00:09:06.367 }, 00:09:06.367 { 00:09:06.367 "name": "pt2", 00:09:06.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.367 "is_configured": true, 00:09:06.367 "data_offset": 2048, 00:09:06.367 "data_size": 63488 00:09:06.367 } 00:09:06.367 ] 00:09:06.367 }' 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.367 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:06.628 
16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.628 [2024-11-05 16:23:19.656957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.628 "name": "raid_bdev1", 00:09:06.628 "aliases": [ 00:09:06.628 "880242cc-98c6-4a37-ab92-56015ab2dce3" 00:09:06.628 ], 00:09:06.628 "product_name": "Raid Volume", 00:09:06.628 "block_size": 512, 00:09:06.628 "num_blocks": 126976, 00:09:06.628 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:06.628 "assigned_rate_limits": { 00:09:06.628 "rw_ios_per_sec": 0, 00:09:06.628 "rw_mbytes_per_sec": 0, 00:09:06.628 "r_mbytes_per_sec": 0, 00:09:06.628 "w_mbytes_per_sec": 0 00:09:06.628 }, 00:09:06.628 "claimed": false, 00:09:06.628 "zoned": false, 00:09:06.628 "supported_io_types": { 00:09:06.628 "read": true, 00:09:06.628 "write": true, 00:09:06.628 "unmap": true, 00:09:06.628 "flush": true, 00:09:06.628 "reset": true, 00:09:06.628 "nvme_admin": false, 00:09:06.628 "nvme_io": false, 00:09:06.628 "nvme_io_md": false, 00:09:06.628 
"write_zeroes": true, 00:09:06.628 "zcopy": false, 00:09:06.628 "get_zone_info": false, 00:09:06.628 "zone_management": false, 00:09:06.628 "zone_append": false, 00:09:06.628 "compare": false, 00:09:06.628 "compare_and_write": false, 00:09:06.628 "abort": false, 00:09:06.628 "seek_hole": false, 00:09:06.628 "seek_data": false, 00:09:06.628 "copy": false, 00:09:06.628 "nvme_iov_md": false 00:09:06.628 }, 00:09:06.628 "memory_domains": [ 00:09:06.628 { 00:09:06.628 "dma_device_id": "system", 00:09:06.628 "dma_device_type": 1 00:09:06.628 }, 00:09:06.628 { 00:09:06.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.628 "dma_device_type": 2 00:09:06.628 }, 00:09:06.628 { 00:09:06.628 "dma_device_id": "system", 00:09:06.628 "dma_device_type": 1 00:09:06.628 }, 00:09:06.628 { 00:09:06.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.628 "dma_device_type": 2 00:09:06.628 } 00:09:06.628 ], 00:09:06.628 "driver_specific": { 00:09:06.628 "raid": { 00:09:06.628 "uuid": "880242cc-98c6-4a37-ab92-56015ab2dce3", 00:09:06.628 "strip_size_kb": 64, 00:09:06.628 "state": "online", 00:09:06.628 "raid_level": "raid0", 00:09:06.628 "superblock": true, 00:09:06.628 "num_base_bdevs": 2, 00:09:06.628 "num_base_bdevs_discovered": 2, 00:09:06.628 "num_base_bdevs_operational": 2, 00:09:06.628 "base_bdevs_list": [ 00:09:06.628 { 00:09:06.628 "name": "pt1", 00:09:06.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.628 "is_configured": true, 00:09:06.628 "data_offset": 2048, 00:09:06.628 "data_size": 63488 00:09:06.628 }, 00:09:06.628 { 00:09:06.628 "name": "pt2", 00:09:06.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.628 "is_configured": true, 00:09:06.628 "data_offset": 2048, 00:09:06.628 "data_size": 63488 00:09:06.628 } 00:09:06.628 ] 00:09:06.628 } 00:09:06.628 } 00:09:06.628 }' 00:09:06.628 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:06.888 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:06.888 pt2' 00:09:06.888 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.889 16:23:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.889 [2024-11-05 16:23:19.884993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 880242cc-98c6-4a37-ab92-56015ab2dce3 '!=' 880242cc-98c6-4a37-ab92-56015ab2dce3 ']' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61426 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61426 ']' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61426 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61426 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.889 killing process with pid 61426 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61426' 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61426 00:09:06.889 [2024-11-05 16:23:19.967071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.889 [2024-11-05 16:23:19.967215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.889 16:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61426 00:09:06.889 [2024-11-05 16:23:19.967281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.889 [2024-11-05 16:23:19.967294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:07.147 [2024-11-05 16:23:20.197176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.530 16:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:08.530 00:09:08.530 real 0m4.535s 00:09:08.530 user 0m6.358s 00:09:08.530 sys 0m0.748s 00:09:08.530 16:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.530 16:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.530 ************************************ 00:09:08.530 END TEST raid_superblock_test 00:09:08.530 ************************************ 00:09:08.530 16:23:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:08.530 16:23:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:08.530 16:23:21 bdev_raid -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:09:08.530 16:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.530 ************************************ 00:09:08.530 START TEST raid_read_error_test 00:09:08.530 ************************************ 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eI2OuEoZsp 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61637 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61637 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61637 ']' 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.530 16:23:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.530 [2024-11-05 16:23:21.507909] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:08.530 [2024-11-05 16:23:21.508040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61637 ] 00:09:08.789 [2024-11-05 16:23:21.684177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.789 [2024-11-05 16:23:21.804880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.048 [2024-11-05 16:23:22.011069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.048 [2024-11-05 16:23:22.011124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.306 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 BaseBdev1_malloc 00:09:09.565 16:23:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 true 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 [2024-11-05 16:23:22.417160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.565 [2024-11-05 16:23:22.417212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.565 [2024-11-05 16:23:22.417233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.565 [2024-11-05 16:23:22.417244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.565 [2024-11-05 16:23:22.419416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.565 [2024-11-05 16:23:22.419451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.565 BaseBdev1 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 BaseBdev2_malloc 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.565 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 true 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.566 [2024-11-05 16:23:22.486882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.566 [2024-11-05 16:23:22.486933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.566 [2024-11-05 16:23:22.486949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.566 [2024-11-05 16:23:22.486960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.566 [2024-11-05 16:23:22.489154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.566 [2024-11-05 16:23:22.489192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.566 BaseBdev2 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.566 [2024-11-05 16:23:22.498937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.566 [2024-11-05 16:23:22.500990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.566 [2024-11-05 16:23:22.501210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.566 [2024-11-05 16:23:22.501238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:09.566 [2024-11-05 16:23:22.501499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:09.566 [2024-11-05 16:23:22.501726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.566 [2024-11-05 16:23:22.501748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:09.566 [2024-11-05 16:23:22.501929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.566 "name": "raid_bdev1", 00:09:09.566 "uuid": "30baca9e-23b9-4a0e-acaf-98c69a57c76f", 00:09:09.566 "strip_size_kb": 64, 00:09:09.566 "state": "online", 00:09:09.566 "raid_level": "raid0", 00:09:09.566 "superblock": true, 00:09:09.566 "num_base_bdevs": 2, 00:09:09.566 "num_base_bdevs_discovered": 2, 00:09:09.566 "num_base_bdevs_operational": 2, 00:09:09.566 "base_bdevs_list": [ 00:09:09.566 { 00:09:09.566 "name": "BaseBdev1", 00:09:09.566 "uuid": "7d77e858-1dbd-5559-9e8c-7c79fbb316c2", 00:09:09.566 "is_configured": true, 00:09:09.566 "data_offset": 2048, 00:09:09.566 "data_size": 63488 00:09:09.566 }, 00:09:09.566 { 00:09:09.566 "name": "BaseBdev2", 00:09:09.566 "uuid": 
"116c9cd0-2555-5ed1-911d-3dc57f2fbd49", 00:09:09.566 "is_configured": true, 00:09:09.566 "data_offset": 2048, 00:09:09.566 "data_size": 63488 00:09:09.566 } 00:09:09.566 ] 00:09:09.566 }' 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.566 16:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.136 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.136 16:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.136 [2024-11-05 16:23:23.031612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.074 "name": "raid_bdev1", 00:09:11.074 "uuid": "30baca9e-23b9-4a0e-acaf-98c69a57c76f", 00:09:11.074 "strip_size_kb": 64, 00:09:11.074 "state": "online", 00:09:11.074 "raid_level": "raid0", 00:09:11.074 "superblock": true, 00:09:11.074 "num_base_bdevs": 2, 00:09:11.074 "num_base_bdevs_discovered": 2, 00:09:11.074 "num_base_bdevs_operational": 2, 00:09:11.074 "base_bdevs_list": [ 00:09:11.074 { 00:09:11.074 "name": "BaseBdev1", 00:09:11.074 "uuid": "7d77e858-1dbd-5559-9e8c-7c79fbb316c2", 00:09:11.074 "is_configured": true, 00:09:11.074 "data_offset": 2048, 00:09:11.074 "data_size": 63488 00:09:11.074 }, 00:09:11.074 { 00:09:11.074 "name": "BaseBdev2", 00:09:11.074 "uuid": 
"116c9cd0-2555-5ed1-911d-3dc57f2fbd49", 00:09:11.074 "is_configured": true, 00:09:11.074 "data_offset": 2048, 00:09:11.074 "data_size": 63488 00:09:11.074 } 00:09:11.074 ] 00:09:11.074 }' 00:09:11.074 16:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.074 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.641 [2024-11-05 16:23:24.427912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.641 [2024-11-05 16:23:24.427949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.641 [2024-11-05 16:23:24.431058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.641 [2024-11-05 16:23:24.431102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.641 [2024-11-05 16:23:24.431137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.641 [2024-11-05 16:23:24.431148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.641 { 00:09:11.641 "results": [ 00:09:11.641 { 00:09:11.641 "job": "raid_bdev1", 00:09:11.641 "core_mask": "0x1", 00:09:11.641 "workload": "randrw", 00:09:11.641 "percentage": 50, 00:09:11.641 "status": "finished", 00:09:11.641 "queue_depth": 1, 00:09:11.641 "io_size": 131072, 00:09:11.641 "runtime": 1.397037, 00:09:11.641 "iops": 15592.285673178305, 00:09:11.641 "mibps": 1949.0357091472881, 00:09:11.641 "io_failed": 1, 00:09:11.641 "io_timeout": 0, 00:09:11.641 "avg_latency_us": 
89.02429538445749, 00:09:11.641 "min_latency_us": 26.1589519650655, 00:09:11.641 "max_latency_us": 1473.844541484716 00:09:11.641 } 00:09:11.641 ], 00:09:11.641 "core_count": 1 00:09:11.641 } 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61637 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61637 ']' 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61637 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61637 00:09:11.641 killing process with pid 61637 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61637' 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61637 00:09:11.641 16:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61637 00:09:11.641 [2024-11-05 16:23:24.473605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.641 [2024-11-05 16:23:24.612648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eI2OuEoZsp 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:13.015 16:23:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.015 16:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:13.015 00:09:13.016 real 0m4.406s 00:09:13.016 user 0m5.314s 00:09:13.016 sys 0m0.539s 00:09:13.016 16:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.016 16:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.016 ************************************ 00:09:13.016 END TEST raid_read_error_test 00:09:13.016 ************************************ 00:09:13.016 16:23:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:13.016 16:23:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:13.016 16:23:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.016 16:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.016 ************************************ 00:09:13.016 START TEST raid_write_error_test 00:09:13.016 ************************************ 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:13.016 
16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.016 16:23:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J2yqv0KU7g 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61777 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61777 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61777 ']' 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.016 16:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.016 [2024-11-05 16:23:26.012397] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:09:13.016 [2024-11-05 16:23:26.012556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61777 ] 00:09:13.273 [2024-11-05 16:23:26.191327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.273 [2024-11-05 16:23:26.309978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.531 [2024-11-05 16:23:26.511329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.531 [2024-11-05 16:23:26.511403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.790 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 BaseBdev1_malloc 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 true 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 [2024-11-05 16:23:26.891803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.049 [2024-11-05 16:23:26.891854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.049 [2024-11-05 16:23:26.891873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.049 [2024-11-05 16:23:26.891883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.049 [2024-11-05 16:23:26.894088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.049 [2024-11-05 16:23:26.894126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.049 BaseBdev1 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 BaseBdev2_malloc 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.049 16:23:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 true 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 [2024-11-05 16:23:26.947007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.049 [2024-11-05 16:23:26.947058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.049 [2024-11-05 16:23:26.947075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.049 [2024-11-05 16:23:26.947086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.049 [2024-11-05 16:23:26.949214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.049 [2024-11-05 16:23:26.949264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.049 BaseBdev2 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 [2024-11-05 16:23:26.955051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:14.049 [2024-11-05 16:23:26.956899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.049 [2024-11-05 16:23:26.957090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.049 [2024-11-05 16:23:26.957114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.049 [2024-11-05 16:23:26.957344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:14.049 [2024-11-05 16:23:26.957542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.049 [2024-11-05 16:23:26.957562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:14.049 [2024-11-05 16:23:26.957715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.049 16:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.049 16:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.049 "name": "raid_bdev1", 00:09:14.049 "uuid": "63e2d178-faa7-461d-8964-898955588183", 00:09:14.049 "strip_size_kb": 64, 00:09:14.049 "state": "online", 00:09:14.049 "raid_level": "raid0", 00:09:14.049 "superblock": true, 00:09:14.049 "num_base_bdevs": 2, 00:09:14.049 "num_base_bdevs_discovered": 2, 00:09:14.049 "num_base_bdevs_operational": 2, 00:09:14.049 "base_bdevs_list": [ 00:09:14.049 { 00:09:14.049 "name": "BaseBdev1", 00:09:14.049 "uuid": "b731b9f9-5354-5b5b-94e5-24c5cd9d9654", 00:09:14.049 "is_configured": true, 00:09:14.049 "data_offset": 2048, 00:09:14.049 "data_size": 63488 00:09:14.049 }, 00:09:14.049 { 00:09:14.049 "name": "BaseBdev2", 00:09:14.049 "uuid": "f968ba51-67a0-53ee-86a0-f04122b2fad0", 00:09:14.049 "is_configured": true, 00:09:14.049 "data_offset": 2048, 00:09:14.049 "data_size": 63488 00:09:14.049 } 00:09:14.049 ] 00:09:14.049 }' 00:09:14.049 16:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.049 16:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.313 16:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.313 16:23:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.577 [2024-11-05 16:23:27.491791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.512 16:23:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.512 "name": "raid_bdev1", 00:09:15.512 "uuid": "63e2d178-faa7-461d-8964-898955588183", 00:09:15.512 "strip_size_kb": 64, 00:09:15.512 "state": "online", 00:09:15.512 "raid_level": "raid0", 00:09:15.512 "superblock": true, 00:09:15.512 "num_base_bdevs": 2, 00:09:15.512 "num_base_bdevs_discovered": 2, 00:09:15.512 "num_base_bdevs_operational": 2, 00:09:15.512 "base_bdevs_list": [ 00:09:15.512 { 00:09:15.512 "name": "BaseBdev1", 00:09:15.512 "uuid": "b731b9f9-5354-5b5b-94e5-24c5cd9d9654", 00:09:15.512 "is_configured": true, 00:09:15.512 "data_offset": 2048, 00:09:15.512 "data_size": 63488 00:09:15.512 }, 00:09:15.512 { 00:09:15.512 "name": "BaseBdev2", 00:09:15.512 "uuid": "f968ba51-67a0-53ee-86a0-f04122b2fad0", 00:09:15.512 "is_configured": true, 00:09:15.512 "data_offset": 2048, 00:09:15.512 "data_size": 63488 00:09:15.512 } 00:09:15.512 ] 00:09:15.512 }' 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.512 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.771 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.771 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 [2024-11-05 16:23:28.856278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.771 [2024-11-05 16:23:28.856326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.771 [2024-11-05 16:23:28.859422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.771 [2024-11-05 16:23:28.859473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.771 [2024-11-05 16:23:28.859505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.771 [2024-11-05 16:23:28.859531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:16.029 { 00:09:16.029 "results": [ 00:09:16.029 { 00:09:16.029 "job": "raid_bdev1", 00:09:16.029 "core_mask": "0x1", 00:09:16.029 "workload": "randrw", 00:09:16.029 "percentage": 50, 00:09:16.029 "status": "finished", 00:09:16.029 "queue_depth": 1, 00:09:16.029 "io_size": 131072, 00:09:16.029 "runtime": 1.365099, 00:09:16.029 "iops": 14864.123407899353, 00:09:16.029 "mibps": 1858.0154259874191, 00:09:16.029 "io_failed": 1, 00:09:16.029 "io_timeout": 0, 00:09:16.029 "avg_latency_us": 93.38109849472805, 00:09:16.029 "min_latency_us": 26.717903930131005, 00:09:16.029 "max_latency_us": 1988.9746724890829 00:09:16.029 } 00:09:16.029 ], 00:09:16.029 "core_count": 1 00:09:16.029 } 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61777 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61777 ']' 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61777 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61777 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:16.029 killing process with pid 61777 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61777' 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61777 00:09:16.029 [2024-11-05 16:23:28.893373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.029 16:23:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61777 00:09:16.029 [2024-11-05 16:23:29.032193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J2yqv0KU7g 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:17.406 00:09:17.406 real 0m4.421s 00:09:17.406 user 0m5.292s 00:09:17.406 sys 0m0.537s 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.406 ************************************ 00:09:17.406 END TEST raid_write_error_test 00:09:17.406 ************************************ 00:09:17.406 16:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.406 16:23:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:17.406 16:23:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:17.406 16:23:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:17.406 16:23:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.406 16:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.406 ************************************ 00:09:17.406 START TEST raid_state_function_test 00:09:17.406 ************************************ 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.406 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:17.407 Process raid pid: 61922 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61922 
00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61922' 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61922 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61922 ']' 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.407 16:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.407 [2024-11-05 16:23:30.453581] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:09:17.407 [2024-11-05 16:23:30.453793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.666 [2024-11-05 16:23:30.630843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.666 [2024-11-05 16:23:30.752743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.924 [2024-11-05 16:23:30.975380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.924 [2024-11-05 16:23:30.975428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.490 [2024-11-05 16:23:31.311859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.490 [2024-11-05 16:23:31.311910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.490 [2024-11-05 16:23:31.311928] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.490 [2024-11-05 16:23:31.311939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 16:23:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.490 "name": "Existed_Raid", 00:09:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.490 "strip_size_kb": 64, 00:09:18.490 "state": "configuring", 00:09:18.490 
"raid_level": "concat", 00:09:18.490 "superblock": false, 00:09:18.490 "num_base_bdevs": 2, 00:09:18.490 "num_base_bdevs_discovered": 0, 00:09:18.490 "num_base_bdevs_operational": 2, 00:09:18.490 "base_bdevs_list": [ 00:09:18.490 { 00:09:18.490 "name": "BaseBdev1", 00:09:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.490 "is_configured": false, 00:09:18.490 "data_offset": 0, 00:09:18.490 "data_size": 0 00:09:18.490 }, 00:09:18.490 { 00:09:18.490 "name": "BaseBdev2", 00:09:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.490 "is_configured": false, 00:09:18.490 "data_offset": 0, 00:09:18.490 "data_size": 0 00:09:18.490 } 00:09:18.490 ] 00:09:18.490 }' 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.490 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.749 [2024-11-05 16:23:31.747075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.749 [2024-11-05 16:23:31.747113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:18.749 [2024-11-05 16:23:31.759041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.749 [2024-11-05 16:23:31.759083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.749 [2024-11-05 16:23:31.759093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.749 [2024-11-05 16:23:31.759104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.749 [2024-11-05 16:23:31.806637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.749 BaseBdev1 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.749 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.749 [ 00:09:18.749 { 00:09:18.749 "name": "BaseBdev1", 00:09:18.749 "aliases": [ 00:09:18.749 "65b3b63d-e6eb-4b23-8d94-ec52c7823b54" 00:09:18.749 ], 00:09:18.749 "product_name": "Malloc disk", 00:09:18.749 "block_size": 512, 00:09:18.749 "num_blocks": 65536, 00:09:18.749 "uuid": "65b3b63d-e6eb-4b23-8d94-ec52c7823b54", 00:09:18.749 "assigned_rate_limits": { 00:09:18.749 "rw_ios_per_sec": 0, 00:09:18.749 "rw_mbytes_per_sec": 0, 00:09:18.749 "r_mbytes_per_sec": 0, 00:09:18.749 "w_mbytes_per_sec": 0 00:09:18.749 }, 00:09:18.749 "claimed": true, 00:09:18.749 "claim_type": "exclusive_write", 00:09:18.749 "zoned": false, 00:09:18.749 "supported_io_types": { 00:09:18.749 "read": true, 00:09:18.749 "write": true, 00:09:18.749 "unmap": true, 00:09:18.749 "flush": true, 00:09:18.749 "reset": true, 00:09:18.749 "nvme_admin": false, 00:09:18.749 "nvme_io": false, 00:09:18.749 "nvme_io_md": false, 00:09:18.749 "write_zeroes": true, 00:09:18.749 "zcopy": true, 00:09:18.749 "get_zone_info": false, 00:09:19.007 "zone_management": false, 00:09:19.007 "zone_append": false, 00:09:19.007 "compare": false, 00:09:19.007 "compare_and_write": false, 00:09:19.007 "abort": true, 00:09:19.007 "seek_hole": false, 00:09:19.007 "seek_data": false, 00:09:19.007 "copy": true, 00:09:19.007 "nvme_iov_md": 
false 00:09:19.007 }, 00:09:19.007 "memory_domains": [ 00:09:19.007 { 00:09:19.007 "dma_device_id": "system", 00:09:19.007 "dma_device_type": 1 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.007 "dma_device_type": 2 00:09:19.007 } 00:09:19.007 ], 00:09:19.007 "driver_specific": {} 00:09:19.007 } 00:09:19.007 ] 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.007 
16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.007 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.007 "name": "Existed_Raid", 00:09:19.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.007 "strip_size_kb": 64, 00:09:19.007 "state": "configuring", 00:09:19.007 "raid_level": "concat", 00:09:19.007 "superblock": false, 00:09:19.007 "num_base_bdevs": 2, 00:09:19.007 "num_base_bdevs_discovered": 1, 00:09:19.007 "num_base_bdevs_operational": 2, 00:09:19.007 "base_bdevs_list": [ 00:09:19.007 { 00:09:19.007 "name": "BaseBdev1", 00:09:19.007 "uuid": "65b3b63d-e6eb-4b23-8d94-ec52c7823b54", 00:09:19.007 "is_configured": true, 00:09:19.007 "data_offset": 0, 00:09:19.007 "data_size": 65536 00:09:19.007 }, 00:09:19.007 { 00:09:19.007 "name": "BaseBdev2", 00:09:19.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.008 "is_configured": false, 00:09:19.008 "data_offset": 0, 00:09:19.008 "data_size": 0 00:09:19.008 } 00:09:19.008 ] 00:09:19.008 }' 00:09:19.008 16:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.008 16:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.266 [2024-11-05 16:23:32.253938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.266 [2024-11-05 16:23:32.254000] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.266 [2024-11-05 16:23:32.265956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.266 [2024-11-05 16:23:32.267967] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.266 [2024-11-05 16:23:32.268014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.266 "name": "Existed_Raid", 00:09:19.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.266 "strip_size_kb": 64, 00:09:19.266 "state": "configuring", 00:09:19.266 "raid_level": "concat", 00:09:19.266 "superblock": false, 00:09:19.266 "num_base_bdevs": 2, 00:09:19.266 "num_base_bdevs_discovered": 1, 00:09:19.266 "num_base_bdevs_operational": 2, 00:09:19.266 "base_bdevs_list": [ 00:09:19.266 { 00:09:19.266 "name": "BaseBdev1", 00:09:19.266 "uuid": "65b3b63d-e6eb-4b23-8d94-ec52c7823b54", 00:09:19.266 "is_configured": true, 00:09:19.266 "data_offset": 0, 00:09:19.266 "data_size": 65536 00:09:19.266 }, 00:09:19.266 { 00:09:19.266 "name": "BaseBdev2", 00:09:19.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.266 "is_configured": false, 00:09:19.266 "data_offset": 0, 00:09:19.266 "data_size": 0 00:09:19.266 } 
00:09:19.266 ] 00:09:19.266 }' 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.266 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.834 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 [2024-11-05 16:23:32.801494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.835 [2024-11-05 16:23:32.801703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.835 [2024-11-05 16:23:32.801735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:19.835 [2024-11-05 16:23:32.802158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:19.835 [2024-11-05 16:23:32.802394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.835 [2024-11-05 16:23:32.802444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.835 [2024-11-05 16:23:32.802758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.835 BaseBdev2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.835 16:23:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 [ 00:09:19.835 { 00:09:19.835 "name": "BaseBdev2", 00:09:19.835 "aliases": [ 00:09:19.835 "3a5af146-dab8-4471-8382-4a1223c4cc1b" 00:09:19.835 ], 00:09:19.835 "product_name": "Malloc disk", 00:09:19.835 "block_size": 512, 00:09:19.835 "num_blocks": 65536, 00:09:19.835 "uuid": "3a5af146-dab8-4471-8382-4a1223c4cc1b", 00:09:19.835 "assigned_rate_limits": { 00:09:19.835 "rw_ios_per_sec": 0, 00:09:19.835 "rw_mbytes_per_sec": 0, 00:09:19.835 "r_mbytes_per_sec": 0, 00:09:19.835 "w_mbytes_per_sec": 0 00:09:19.835 }, 00:09:19.835 "claimed": true, 00:09:19.835 "claim_type": "exclusive_write", 00:09:19.835 "zoned": false, 00:09:19.835 "supported_io_types": { 00:09:19.835 "read": true, 00:09:19.835 "write": true, 00:09:19.835 "unmap": true, 00:09:19.835 "flush": true, 00:09:19.835 "reset": true, 00:09:19.835 "nvme_admin": false, 00:09:19.835 "nvme_io": false, 00:09:19.835 "nvme_io_md": 
false, 00:09:19.835 "write_zeroes": true, 00:09:19.835 "zcopy": true, 00:09:19.835 "get_zone_info": false, 00:09:19.835 "zone_management": false, 00:09:19.835 "zone_append": false, 00:09:19.835 "compare": false, 00:09:19.835 "compare_and_write": false, 00:09:19.835 "abort": true, 00:09:19.835 "seek_hole": false, 00:09:19.835 "seek_data": false, 00:09:19.835 "copy": true, 00:09:19.835 "nvme_iov_md": false 00:09:19.835 }, 00:09:19.835 "memory_domains": [ 00:09:19.835 { 00:09:19.835 "dma_device_id": "system", 00:09:19.835 "dma_device_type": 1 00:09:19.835 }, 00:09:19.835 { 00:09:19.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.835 "dma_device_type": 2 00:09:19.835 } 00:09:19.835 ], 00:09:19.835 "driver_specific": {} 00:09:19.835 } 00:09:19.835 ] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.835 "name": "Existed_Raid", 00:09:19.835 "uuid": "84a49fa3-ba62-4765-a6b8-9082f4b291db", 00:09:19.835 "strip_size_kb": 64, 00:09:19.835 "state": "online", 00:09:19.835 "raid_level": "concat", 00:09:19.835 "superblock": false, 00:09:19.835 "num_base_bdevs": 2, 00:09:19.835 "num_base_bdevs_discovered": 2, 00:09:19.835 "num_base_bdevs_operational": 2, 00:09:19.835 "base_bdevs_list": [ 00:09:19.835 { 00:09:19.835 "name": "BaseBdev1", 00:09:19.835 "uuid": "65b3b63d-e6eb-4b23-8d94-ec52c7823b54", 00:09:19.835 "is_configured": true, 00:09:19.835 "data_offset": 0, 00:09:19.835 "data_size": 65536 00:09:19.835 }, 00:09:19.835 { 00:09:19.835 "name": "BaseBdev2", 00:09:19.835 "uuid": "3a5af146-dab8-4471-8382-4a1223c4cc1b", 00:09:19.835 "is_configured": true, 00:09:19.835 "data_offset": 0, 00:09:19.835 "data_size": 65536 00:09:19.835 } 00:09:19.835 ] 00:09:19.835 }' 00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:19.835 16:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.403 [2024-11-05 16:23:33.313001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.403 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.403 "name": "Existed_Raid", 00:09:20.403 "aliases": [ 00:09:20.403 "84a49fa3-ba62-4765-a6b8-9082f4b291db" 00:09:20.403 ], 00:09:20.403 "product_name": "Raid Volume", 00:09:20.403 "block_size": 512, 00:09:20.403 "num_blocks": 131072, 00:09:20.403 "uuid": "84a49fa3-ba62-4765-a6b8-9082f4b291db", 00:09:20.403 "assigned_rate_limits": { 00:09:20.403 "rw_ios_per_sec": 0, 00:09:20.403 "rw_mbytes_per_sec": 0, 00:09:20.403 "r_mbytes_per_sec": 
0, 00:09:20.403 "w_mbytes_per_sec": 0 00:09:20.403 }, 00:09:20.403 "claimed": false, 00:09:20.403 "zoned": false, 00:09:20.403 "supported_io_types": { 00:09:20.403 "read": true, 00:09:20.403 "write": true, 00:09:20.403 "unmap": true, 00:09:20.403 "flush": true, 00:09:20.403 "reset": true, 00:09:20.404 "nvme_admin": false, 00:09:20.404 "nvme_io": false, 00:09:20.404 "nvme_io_md": false, 00:09:20.404 "write_zeroes": true, 00:09:20.404 "zcopy": false, 00:09:20.404 "get_zone_info": false, 00:09:20.404 "zone_management": false, 00:09:20.404 "zone_append": false, 00:09:20.404 "compare": false, 00:09:20.404 "compare_and_write": false, 00:09:20.404 "abort": false, 00:09:20.404 "seek_hole": false, 00:09:20.404 "seek_data": false, 00:09:20.404 "copy": false, 00:09:20.404 "nvme_iov_md": false 00:09:20.404 }, 00:09:20.404 "memory_domains": [ 00:09:20.404 { 00:09:20.404 "dma_device_id": "system", 00:09:20.404 "dma_device_type": 1 00:09:20.404 }, 00:09:20.404 { 00:09:20.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.404 "dma_device_type": 2 00:09:20.404 }, 00:09:20.404 { 00:09:20.404 "dma_device_id": "system", 00:09:20.404 "dma_device_type": 1 00:09:20.404 }, 00:09:20.404 { 00:09:20.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.404 "dma_device_type": 2 00:09:20.404 } 00:09:20.404 ], 00:09:20.404 "driver_specific": { 00:09:20.404 "raid": { 00:09:20.404 "uuid": "84a49fa3-ba62-4765-a6b8-9082f4b291db", 00:09:20.404 "strip_size_kb": 64, 00:09:20.404 "state": "online", 00:09:20.404 "raid_level": "concat", 00:09:20.404 "superblock": false, 00:09:20.404 "num_base_bdevs": 2, 00:09:20.404 "num_base_bdevs_discovered": 2, 00:09:20.404 "num_base_bdevs_operational": 2, 00:09:20.404 "base_bdevs_list": [ 00:09:20.404 { 00:09:20.404 "name": "BaseBdev1", 00:09:20.404 "uuid": "65b3b63d-e6eb-4b23-8d94-ec52c7823b54", 00:09:20.404 "is_configured": true, 00:09:20.404 "data_offset": 0, 00:09:20.404 "data_size": 65536 00:09:20.404 }, 00:09:20.404 { 00:09:20.404 "name": "BaseBdev2", 
00:09:20.404 "uuid": "3a5af146-dab8-4471-8382-4a1223c4cc1b", 00:09:20.404 "is_configured": true, 00:09:20.404 "data_offset": 0, 00:09:20.404 "data_size": 65536 00:09:20.404 } 00:09:20.404 ] 00:09:20.404 } 00:09:20.404 } 00:09:20.404 }' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.404 BaseBdev2' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.404 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.664 [2024-11-05 16:23:33.544407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.664 [2024-11-05 16:23:33.544442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.664 [2024-11-05 16:23:33.544506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.664 "name": "Existed_Raid", 00:09:20.664 "uuid": "84a49fa3-ba62-4765-a6b8-9082f4b291db", 00:09:20.664 "strip_size_kb": 64, 00:09:20.664 
"state": "offline", 00:09:20.664 "raid_level": "concat", 00:09:20.664 "superblock": false, 00:09:20.664 "num_base_bdevs": 2, 00:09:20.664 "num_base_bdevs_discovered": 1, 00:09:20.664 "num_base_bdevs_operational": 1, 00:09:20.664 "base_bdevs_list": [ 00:09:20.664 { 00:09:20.664 "name": null, 00:09:20.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.664 "is_configured": false, 00:09:20.664 "data_offset": 0, 00:09:20.664 "data_size": 65536 00:09:20.664 }, 00:09:20.664 { 00:09:20.664 "name": "BaseBdev2", 00:09:20.664 "uuid": "3a5af146-dab8-4471-8382-4a1223c4cc1b", 00:09:20.664 "is_configured": true, 00:09:20.664 "data_offset": 0, 00:09:20.664 "data_size": 65536 00:09:20.664 } 00:09:20.664 ] 00:09:20.664 }' 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.664 16:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.233 [2024-11-05 16:23:34.168405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.233 [2024-11-05 16:23:34.168547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.233 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61922 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61922 ']' 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61922 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.491 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61922 00:09:21.492 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:21.492 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:21.492 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61922' 00:09:21.492 killing process with pid 61922 00:09:21.492 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61922 00:09:21.492 [2024-11-05 16:23:34.361539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.492 16:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61922 00:09:21.492 [2024-11-05 16:23:34.380557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.939 00:09:22.939 real 0m5.253s 00:09:22.939 user 0m7.520s 00:09:22.939 sys 0m0.836s 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.939 ************************************ 00:09:22.939 END TEST raid_state_function_test 00:09:22.939 ************************************ 00:09:22.939 16:23:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:22.939 16:23:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:09:22.939 16:23:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.939 16:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.939 ************************************ 00:09:22.939 START TEST raid_state_function_test_sb 00:09:22.939 ************************************ 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62175 00:09:22.939 Process raid pid: 62175 00:09:22.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62175' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62175 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62175 ']' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.939 16:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.939 [2024-11-05 16:23:35.777071] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:09:22.939 [2024-11-05 16:23:35.777271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.939 [2024-11-05 16:23:35.933399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.199 [2024-11-05 16:23:36.053141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.199 [2024-11-05 16:23:36.267495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.199 [2024-11-05 16:23:36.267554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.771 [2024-11-05 16:23:36.675360] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.771 [2024-11-05 16:23:36.675506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.771 [2024-11-05 16:23:36.675533] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.771 [2024-11-05 16:23:36.675545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:23.771 "name": "Existed_Raid",
00:09:23.771 "uuid": "304e8f26-ee06-42e3-84b2-d102e42a0955",
00:09:23.771 "strip_size_kb": 64,
00:09:23.771 "state": "configuring",
00:09:23.771 "raid_level": "concat",
00:09:23.771 "superblock": true,
00:09:23.771 "num_base_bdevs": 2,
00:09:23.771 "num_base_bdevs_discovered": 0,
00:09:23.771 "num_base_bdevs_operational": 2,
00:09:23.771 "base_bdevs_list": [
00:09:23.771 {
00:09:23.771 "name": "BaseBdev1",
00:09:23.771 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.771 "is_configured": false,
00:09:23.771 "data_offset": 0,
00:09:23.771 "data_size": 0
00:09:23.771 },
00:09:23.771 {
00:09:23.771 "name": "BaseBdev2",
00:09:23.771 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.771 "is_configured": false,
00:09:23.771 "data_offset": 0,
00:09:23.771 "data_size": 0
00:09:23.771 }
00:09:23.771 ]
00:09:23.771 }'
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:23.771 16:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.031 [2024-11-05 16:23:37.098600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:24.031 [2024-11-05 16:23:37.098709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.031 [2024-11-05 16:23:37.110579] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:24.031 [2024-11-05 16:23:37.110667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:24.031 [2024-11-05 16:23:37.110700] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:24.031 [2024-11-05 16:23:37.110730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.031 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.290 [2024-11-05 16:23:37.162719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.290 BaseBdev1
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.290 [
00:09:24.290 {
00:09:24.290 "name": "BaseBdev1",
00:09:24.290 "aliases": [
00:09:24.290 "70a042b9-0669-409c-8b14-93c9860d7109"
00:09:24.290 ],
00:09:24.290 "product_name": "Malloc disk",
00:09:24.290 "block_size": 512,
00:09:24.290 "num_blocks": 65536,
00:09:24.290 "uuid": "70a042b9-0669-409c-8b14-93c9860d7109",
00:09:24.290 "assigned_rate_limits": {
00:09:24.290 "rw_ios_per_sec": 0,
00:09:24.290 "rw_mbytes_per_sec": 0,
00:09:24.290 "r_mbytes_per_sec": 0,
00:09:24.290 "w_mbytes_per_sec": 0
00:09:24.290 },
00:09:24.290 "claimed": true,
00:09:24.290 "claim_type": "exclusive_write",
00:09:24.290 "zoned": false,
00:09:24.290 "supported_io_types": {
00:09:24.290 "read": true,
00:09:24.290 "write": true,
00:09:24.290 "unmap": true,
00:09:24.290 "flush": true,
00:09:24.290 "reset": true,
00:09:24.290 "nvme_admin": false,
00:09:24.290 "nvme_io": false,
00:09:24.290 "nvme_io_md": false,
00:09:24.290 "write_zeroes": true,
00:09:24.290 "zcopy": true,
00:09:24.290 "get_zone_info": false,
00:09:24.290 "zone_management": false,
00:09:24.290 "zone_append": false,
00:09:24.290 "compare": false,
00:09:24.290 "compare_and_write": false,
00:09:24.290 "abort": true,
00:09:24.290 "seek_hole": false,
00:09:24.290 "seek_data": false,
00:09:24.290 "copy": true,
00:09:24.290 "nvme_iov_md": false
00:09:24.290 },
00:09:24.290 "memory_domains": [
00:09:24.290 {
00:09:24.290 "dma_device_id": "system",
00:09:24.290 "dma_device_type": 1
00:09:24.290 },
00:09:24.290 {
00:09:24.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:24.290 "dma_device_type": 2
00:09:24.290 }
00:09:24.290 ],
00:09:24.290 "driver_specific": {}
00:09:24.290 }
00:09:24.290 ]
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.290 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.291 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.291 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.291 "name": "Existed_Raid",
00:09:24.291 "uuid": "e48c035a-f80a-4be3-8390-beae1858523c",
00:09:24.291 "strip_size_kb": 64,
00:09:24.291 "state": "configuring",
00:09:24.291 "raid_level": "concat",
00:09:24.291 "superblock": true,
00:09:24.291 "num_base_bdevs": 2,
00:09:24.291 "num_base_bdevs_discovered": 1,
00:09:24.291 "num_base_bdevs_operational": 2,
00:09:24.291 "base_bdevs_list": [
00:09:24.291 {
00:09:24.291 "name": "BaseBdev1",
00:09:24.291 "uuid": "70a042b9-0669-409c-8b14-93c9860d7109",
00:09:24.291 "is_configured": true,
00:09:24.291 "data_offset": 2048,
00:09:24.291 "data_size": 63488
00:09:24.291 },
00:09:24.291 {
00:09:24.291 "name": "BaseBdev2",
00:09:24.291 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.291 "is_configured": false,
00:09:24.291 "data_offset": 0,
00:09:24.291 "data_size": 0
00:09:24.291 }
00:09:24.291 ]
00:09:24.291 }'
00:09:24.291 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.291 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.550 [2024-11-05 16:23:37.602080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:24.550 [2024-11-05 16:23:37.602158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.550 [2024-11-05 16:23:37.614124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.550 [2024-11-05 16:23:37.616357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:24.550 [2024-11-05 16:23:37.616444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.550 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:24.809 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.809 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.809 "name": "Existed_Raid",
00:09:24.809 "uuid": "0009ec87-b122-4fed-9123-8421b5d250f9",
00:09:24.809 "strip_size_kb": 64,
00:09:24.809 "state": "configuring",
00:09:24.809 "raid_level": "concat",
00:09:24.809 "superblock": true,
00:09:24.809 "num_base_bdevs": 2,
00:09:24.809 "num_base_bdevs_discovered": 1,
00:09:24.809 "num_base_bdevs_operational": 2,
00:09:24.809 "base_bdevs_list": [
00:09:24.809 {
00:09:24.809 "name": "BaseBdev1",
00:09:24.809 "uuid": "70a042b9-0669-409c-8b14-93c9860d7109",
00:09:24.809 "is_configured": true,
00:09:24.809 "data_offset": 2048,
00:09:24.809 "data_size": 63488
00:09:24.809 },
00:09:24.809 {
00:09:24.809 "name": "BaseBdev2",
00:09:24.809 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.809 "is_configured": false,
00:09:24.809 "data_offset": 0,
00:09:24.809 "data_size": 0
00:09:24.809 }
00:09:24.809 ]
00:09:24.809 }'
00:09:24.809 16:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.809 16:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.068 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:25.068 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.068 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.068 [2024-11-05 16:23:38.155244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:25.068 [2024-11-05 16:23:38.155560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:25.068 [2024-11-05 16:23:38.155580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:25.068 [2024-11-05 16:23:38.155880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:25.068 [2024-11-05 16:23:38.156048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:25.068 [2024-11-05 16:23:38.156063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
BaseBdev2
[2024-11-05 16:23:38.156222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.328 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.328 [
00:09:25.328 {
00:09:25.328 "name": "BaseBdev2",
00:09:25.328 "aliases": [
00:09:25.328 "c85407f0-2ae1-4f71-b88f-4f4523ef60e9"
00:09:25.328 ],
00:09:25.328 "product_name": "Malloc disk",
00:09:25.328 "block_size": 512,
00:09:25.328 "num_blocks": 65536,
00:09:25.328 "uuid": "c85407f0-2ae1-4f71-b88f-4f4523ef60e9",
00:09:25.328 "assigned_rate_limits": {
00:09:25.328 "rw_ios_per_sec": 0,
00:09:25.328 "rw_mbytes_per_sec": 0,
00:09:25.328 "r_mbytes_per_sec": 0,
00:09:25.328 "w_mbytes_per_sec": 0
00:09:25.328 },
00:09:25.328 "claimed": true,
00:09:25.328 "claim_type": "exclusive_write",
00:09:25.328 "zoned": false,
00:09:25.328 "supported_io_types": {
00:09:25.328 "read": true,
00:09:25.328 "write": true,
00:09:25.328 "unmap": true,
00:09:25.328 "flush": true,
00:09:25.328 "reset": true,
00:09:25.328 "nvme_admin": false,
00:09:25.328 "nvme_io": false,
00:09:25.328 "nvme_io_md": false,
00:09:25.328 "write_zeroes": true,
00:09:25.328 "zcopy": true,
00:09:25.328 "get_zone_info": false,
00:09:25.328 "zone_management": false,
00:09:25.328 "zone_append": false,
00:09:25.328 "compare": false,
00:09:25.328 "compare_and_write": false,
00:09:25.328 "abort": true,
00:09:25.328 "seek_hole": false,
00:09:25.328 "seek_data": false,
00:09:25.328 "copy": true,
00:09:25.328 "nvme_iov_md": false
00:09:25.328 },
00:09:25.328 "memory_domains": [
00:09:25.328 {
00:09:25.328 "dma_device_id": "system",
00:09:25.328 "dma_device_type": 1
00:09:25.328 },
00:09:25.328 {
00:09:25.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.328 "dma_device_type": 2
00:09:25.328 }
00:09:25.328 ],
00:09:25.328 "driver_specific": {}
00:09:25.328 }
00:09:25.329 ]
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:25.329 "name": "Existed_Raid",
00:09:25.329 "uuid": "0009ec87-b122-4fed-9123-8421b5d250f9",
00:09:25.329 "strip_size_kb": 64,
00:09:25.329 "state": "online",
00:09:25.329 "raid_level": "concat",
00:09:25.329 "superblock": true,
00:09:25.329 "num_base_bdevs": 2,
00:09:25.329 "num_base_bdevs_discovered": 2,
00:09:25.329 "num_base_bdevs_operational": 2,
00:09:25.329 "base_bdevs_list": [
00:09:25.329 {
00:09:25.329 "name": "BaseBdev1",
00:09:25.329 "uuid": "70a042b9-0669-409c-8b14-93c9860d7109",
00:09:25.329 "is_configured": true,
00:09:25.329 "data_offset": 2048,
00:09:25.329 "data_size": 63488
00:09:25.329 },
00:09:25.329 {
00:09:25.329 "name": "BaseBdev2",
00:09:25.329 "uuid": "c85407f0-2ae1-4f71-b88f-4f4523ef60e9",
00:09:25.329 "is_configured": true,
00:09:25.329 "data_offset": 2048,
00:09:25.329 "data_size": 63488
00:09:25.329 }
00:09:25.329 ]
00:09:25.329 }'
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:25.329 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.588 [2024-11-05 16:23:38.634782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:25.588 "name": "Existed_Raid",
00:09:25.588 "aliases": [
00:09:25.588 "0009ec87-b122-4fed-9123-8421b5d250f9"
00:09:25.588 ],
00:09:25.588 "product_name": "Raid Volume",
00:09:25.588 "block_size": 512,
00:09:25.588 "num_blocks": 126976,
00:09:25.588 "uuid": "0009ec87-b122-4fed-9123-8421b5d250f9",
00:09:25.588 "assigned_rate_limits": {
00:09:25.588 "rw_ios_per_sec": 0,
00:09:25.588 "rw_mbytes_per_sec": 0,
00:09:25.588 "r_mbytes_per_sec": 0,
00:09:25.588 "w_mbytes_per_sec": 0
00:09:25.588 },
00:09:25.588 "claimed": false,
00:09:25.588 "zoned": false,
00:09:25.588 "supported_io_types": {
00:09:25.588 "read": true,
00:09:25.588 "write": true,
00:09:25.588 "unmap": true,
00:09:25.588 "flush": true,
00:09:25.588 "reset": true,
00:09:25.588 "nvme_admin": false,
00:09:25.588 "nvme_io": false,
00:09:25.588 "nvme_io_md": false,
00:09:25.588 "write_zeroes": true,
00:09:25.588 "zcopy": false,
00:09:25.588 "get_zone_info": false,
00:09:25.588 "zone_management": false,
00:09:25.588 "zone_append": false,
00:09:25.588 "compare": false,
00:09:25.588 "compare_and_write": false,
00:09:25.588 "abort": false,
00:09:25.588 "seek_hole": false,
00:09:25.588 "seek_data": false,
00:09:25.588 "copy": false,
00:09:25.588 "nvme_iov_md": false
00:09:25.588 },
00:09:25.588 "memory_domains": [
00:09:25.588 {
00:09:25.588 "dma_device_id": "system",
00:09:25.588 "dma_device_type": 1
00:09:25.588 },
00:09:25.588 {
00:09:25.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.588 "dma_device_type": 2
00:09:25.588 },
00:09:25.588 {
00:09:25.588 "dma_device_id": "system",
00:09:25.588 "dma_device_type": 1
00:09:25.588 },
00:09:25.588 {
00:09:25.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.588 "dma_device_type": 2
00:09:25.588 }
00:09:25.588 ],
00:09:25.588 "driver_specific": {
00:09:25.588 "raid": {
00:09:25.588 "uuid": "0009ec87-b122-4fed-9123-8421b5d250f9",
00:09:25.588 "strip_size_kb": 64,
00:09:25.588 "state": "online",
00:09:25.588 "raid_level": "concat",
00:09:25.588 "superblock": true,
00:09:25.588 "num_base_bdevs": 2,
00:09:25.588 "num_base_bdevs_discovered": 2,
00:09:25.588 "num_base_bdevs_operational": 2,
00:09:25.588 "base_bdevs_list": [
00:09:25.588 {
00:09:25.588 "name": "BaseBdev1",
00:09:25.588 "uuid": "70a042b9-0669-409c-8b14-93c9860d7109",
00:09:25.588 "is_configured": true,
00:09:25.588 "data_offset": 2048,
00:09:25.588 "data_size": 63488
00:09:25.588 },
00:09:25.588 {
00:09:25.588 "name": "BaseBdev2",
00:09:25.588 "uuid": "c85407f0-2ae1-4f71-b88f-4f4523ef60e9",
00:09:25.588 "is_configured": true,
00:09:25.588 "data_offset": 2048,
00:09:25.588 "data_size": 63488
00:09:25.588 }
00:09:25.588 ]
00:09:25.588 }
00:09:25.588 }
00:09:25.588 }'
00:09:25.588 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:25.847 BaseBdev2'
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.847 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:25.847 [2024-11-05 16:23:38.858146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:25.847 [2024-11-05 16:23:38.858180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:25.847 [2024-11-05 16:23:38.858230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.108 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.109 16:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.109 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.109 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.109 16:23:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.109 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.109 "name": "Existed_Raid",
00:09:26.109 "uuid": "0009ec87-b122-4fed-9123-8421b5d250f9",
00:09:26.109 "strip_size_kb": 64,
00:09:26.109 "state": "offline",
00:09:26.109 "raid_level": "concat",
00:09:26.109 "superblock": true,
00:09:26.109 "num_base_bdevs": 2,
00:09:26.109 "num_base_bdevs_discovered": 1,
00:09:26.109 "num_base_bdevs_operational": 1,
00:09:26.109 "base_bdevs_list": [
00:09:26.109 {
00:09:26.109 "name": null,
00:09:26.109 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.109 "is_configured": false,
00:09:26.109 "data_offset": 0,
00:09:26.109 "data_size": 63488
00:09:26.109 },
00:09:26.109 {
00:09:26.109 "name": "BaseBdev2",
00:09:26.109 "uuid": "c85407f0-2ae1-4f71-b88f-4f4523ef60e9",
00:09:26.109 "is_configured": true,
00:09:26.109 "data_offset": 2048,
00:09:26.109 "data_size": 63488
00:09:26.109 }
00:09:26.109 ]
00:09:26.109 }'
00:09:26.109 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.109 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.677 [2024-11-05 16:23:39.532044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:26.677 [2024-11-05 16:23:39.532106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:26.677 16:23:39
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62175 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62175 ']' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62175 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62175 00:09:26.677 killing process with pid 62175 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62175' 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62175 00:09:26.677 [2024-11-05 16:23:39.740097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.677 16:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62175 00:09:26.677 [2024-11-05 16:23:39.760184] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.055 16:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.055 00:09:28.055 real 0m5.266s 00:09:28.055 user 0m7.632s 00:09:28.055 sys 0m0.821s 00:09:28.055 ************************************ 00:09:28.055 END TEST raid_state_function_test_sb 00:09:28.055 ************************************ 00:09:28.055 16:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.055 16:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.056 16:23:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:28.056 16:23:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:28.056 16:23:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.056 16:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.056 ************************************ 00:09:28.056 START TEST raid_superblock_test 00:09:28.056 ************************************ 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:28.056 
16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62427 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62427 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62427 ']' 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.056 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.056 [2024-11-05 16:23:41.108890] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:28.056 [2024-11-05 16:23:41.109149] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62427 ] 00:09:28.315 [2024-11-05 16:23:41.272371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.315 [2024-11-05 16:23:41.390238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.574 [2024-11-05 16:23:41.596115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.574 [2024-11-05 16:23:41.596214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.143 16:23:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 malloc1 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 [2024-11-05 16:23:42.000448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.143 [2024-11-05 16:23:42.000625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.143 [2024-11-05 16:23:42.000689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:29.143 [2024-11-05 16:23:42.000743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.143 [2024-11-05 16:23:42.003126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.143 [2024-11-05 16:23:42.003208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.143 pt1 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.143 16:23:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 malloc2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 [2024-11-05 16:23:42.061553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.143 [2024-11-05 16:23:42.061611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.143 [2024-11-05 16:23:42.061635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:29.143 
[2024-11-05 16:23:42.061645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.143 [2024-11-05 16:23:42.063810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.143 [2024-11-05 16:23:42.063851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.143 pt2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 [2024-11-05 16:23:42.073608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:29.143 [2024-11-05 16:23:42.075631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.143 [2024-11-05 16:23:42.075814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:29.143 [2024-11-05 16:23:42.075829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:29.143 [2024-11-05 16:23:42.076105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:29.143 [2024-11-05 16:23:42.076275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:29.143 [2024-11-05 16:23:42.076288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:29.143 [2024-11-05 16:23:42.076458] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.143 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.143 "name": "raid_bdev1", 00:09:29.144 "uuid": 
"f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:29.144 "strip_size_kb": 64, 00:09:29.144 "state": "online", 00:09:29.144 "raid_level": "concat", 00:09:29.144 "superblock": true, 00:09:29.144 "num_base_bdevs": 2, 00:09:29.144 "num_base_bdevs_discovered": 2, 00:09:29.144 "num_base_bdevs_operational": 2, 00:09:29.144 "base_bdevs_list": [ 00:09:29.144 { 00:09:29.144 "name": "pt1", 00:09:29.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.144 "is_configured": true, 00:09:29.144 "data_offset": 2048, 00:09:29.144 "data_size": 63488 00:09:29.144 }, 00:09:29.144 { 00:09:29.144 "name": "pt2", 00:09:29.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.144 "is_configured": true, 00:09:29.144 "data_offset": 2048, 00:09:29.144 "data_size": 63488 00:09:29.144 } 00:09:29.144 ] 00:09:29.144 }' 00:09:29.144 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.144 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.714 
16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 [2024-11-05 16:23:42.517124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.714 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.715 "name": "raid_bdev1", 00:09:29.715 "aliases": [ 00:09:29.715 "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e" 00:09:29.715 ], 00:09:29.715 "product_name": "Raid Volume", 00:09:29.715 "block_size": 512, 00:09:29.715 "num_blocks": 126976, 00:09:29.715 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:29.715 "assigned_rate_limits": { 00:09:29.715 "rw_ios_per_sec": 0, 00:09:29.715 "rw_mbytes_per_sec": 0, 00:09:29.715 "r_mbytes_per_sec": 0, 00:09:29.715 "w_mbytes_per_sec": 0 00:09:29.715 }, 00:09:29.715 "claimed": false, 00:09:29.715 "zoned": false, 00:09:29.715 "supported_io_types": { 00:09:29.715 "read": true, 00:09:29.715 "write": true, 00:09:29.715 "unmap": true, 00:09:29.715 "flush": true, 00:09:29.715 "reset": true, 00:09:29.715 "nvme_admin": false, 00:09:29.715 "nvme_io": false, 00:09:29.715 "nvme_io_md": false, 00:09:29.715 "write_zeroes": true, 00:09:29.715 "zcopy": false, 00:09:29.715 "get_zone_info": false, 00:09:29.715 "zone_management": false, 00:09:29.715 "zone_append": false, 00:09:29.715 "compare": false, 00:09:29.715 "compare_and_write": false, 00:09:29.715 "abort": false, 00:09:29.715 "seek_hole": false, 00:09:29.715 "seek_data": false, 00:09:29.715 "copy": false, 00:09:29.715 "nvme_iov_md": false 00:09:29.715 }, 00:09:29.715 "memory_domains": [ 00:09:29.715 { 00:09:29.715 "dma_device_id": "system", 00:09:29.715 "dma_device_type": 1 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.715 "dma_device_type": 2 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "dma_device_id": "system", 00:09:29.715 
"dma_device_type": 1 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.715 "dma_device_type": 2 00:09:29.715 } 00:09:29.715 ], 00:09:29.715 "driver_specific": { 00:09:29.715 "raid": { 00:09:29.715 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:29.715 "strip_size_kb": 64, 00:09:29.715 "state": "online", 00:09:29.715 "raid_level": "concat", 00:09:29.715 "superblock": true, 00:09:29.715 "num_base_bdevs": 2, 00:09:29.715 "num_base_bdevs_discovered": 2, 00:09:29.715 "num_base_bdevs_operational": 2, 00:09:29.715 "base_bdevs_list": [ 00:09:29.715 { 00:09:29.715 "name": "pt1", 00:09:29.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.715 "is_configured": true, 00:09:29.715 "data_offset": 2048, 00:09:29.715 "data_size": 63488 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "name": "pt2", 00:09:29.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.715 "is_configured": true, 00:09:29.715 "data_offset": 2048, 00:09:29.715 "data_size": 63488 00:09:29.715 } 00:09:29.715 ] 00:09:29.715 } 00:09:29.715 } 00:09:29.715 }' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.715 pt2' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:29.715 [2024-11-05 16:23:42.720873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f8bfeaa4-e918-4f32-b72c-c7e25d39f02e 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f8bfeaa4-e918-4f32-b72c-c7e25d39f02e ']' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 [2024-11-05 16:23:42.772411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.715 [2024-11-05 16:23:42.772439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.715 [2024-11-05 16:23:42.772572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.715 [2024-11-05 16:23:42.772625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.715 [2024-11-05 16:23:42.772637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.975 
16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 [2024-11-05 16:23:42.900267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:29.975 [2024-11-05 16:23:42.902406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:29.975 [2024-11-05 16:23:42.902553] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:29.975 [2024-11-05 16:23:42.902687] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:29.975 [2024-11-05 16:23:42.902778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.975 [2024-11-05 16:23:42.902833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:29.975 request: 00:09:29.975 { 00:09:29.975 "name": "raid_bdev1", 00:09:29.975 "raid_level": "concat", 00:09:29.975 "base_bdevs": [ 00:09:29.975 "malloc1", 00:09:29.975 "malloc2" 00:09:29.975 ], 00:09:29.975 "strip_size_kb": 64, 00:09:29.975 "superblock": false, 00:09:29.975 "method": "bdev_raid_create", 00:09:29.975 "req_id": 1 00:09:29.975 } 00:09:29.975 Got JSON-RPC error response 00:09:29.975 response: 00:09:29.975 { 00:09:29.975 "code": -17, 00:09:29.975 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:29.975 } 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.975 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 [2024-11-05 16:23:42.964139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.976 [2024-11-05 16:23:42.964221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.976 [2024-11-05 16:23:42.964244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:29.976 [2024-11-05 16:23:42.964256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.976 [2024-11-05 16:23:42.966670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.976 [2024-11-05 16:23:42.966713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.976 [2024-11-05 16:23:42.966813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:29.976 [2024-11-05 16:23:42.966885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:29.976 pt1 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 16:23:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.976 "name": "raid_bdev1", 00:09:29.976 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:29.976 "strip_size_kb": 64, 00:09:29.976 "state": "configuring", 00:09:29.976 "raid_level": "concat", 00:09:29.976 "superblock": true, 00:09:29.976 "num_base_bdevs": 2, 00:09:29.976 "num_base_bdevs_discovered": 1, 00:09:29.976 "num_base_bdevs_operational": 2, 00:09:29.976 "base_bdevs_list": [ 00:09:29.976 { 00:09:29.976 "name": "pt1", 00:09:29.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.976 "is_configured": true, 00:09:29.976 "data_offset": 2048, 00:09:29.976 "data_size": 63488 00:09:29.976 }, 00:09:29.976 { 00:09:29.976 "name": null, 00:09:29.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.976 "is_configured": false, 00:09:29.976 "data_offset": 2048, 00:09:29.976 "data_size": 63488 00:09:29.976 } 00:09:29.976 ] 00:09:29.976 }' 00:09:29.976 16:23:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.976 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.542 [2024-11-05 16:23:43.431334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.542 [2024-11-05 16:23:43.431462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.542 [2024-11-05 16:23:43.431504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:30.542 [2024-11-05 16:23:43.431552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.542 [2024-11-05 16:23:43.432095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.542 [2024-11-05 16:23:43.432167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.542 [2024-11-05 16:23:43.432286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:30.542 [2024-11-05 16:23:43.432341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.542 [2024-11-05 16:23:43.432508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.542 [2024-11-05 16:23:43.432587] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:30.542 [2024-11-05 16:23:43.432870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.542 [2024-11-05 16:23:43.433056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.542 [2024-11-05 16:23:43.433099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:30.542 [2024-11-05 16:23:43.433288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.542 pt2 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.542 "name": "raid_bdev1", 00:09:30.542 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:30.542 "strip_size_kb": 64, 00:09:30.542 "state": "online", 00:09:30.542 "raid_level": "concat", 00:09:30.542 "superblock": true, 00:09:30.542 "num_base_bdevs": 2, 00:09:30.542 "num_base_bdevs_discovered": 2, 00:09:30.542 "num_base_bdevs_operational": 2, 00:09:30.542 "base_bdevs_list": [ 00:09:30.542 { 00:09:30.542 "name": "pt1", 00:09:30.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.542 "is_configured": true, 00:09:30.542 "data_offset": 2048, 00:09:30.542 "data_size": 63488 00:09:30.542 }, 00:09:30.542 { 00:09:30.542 "name": "pt2", 00:09:30.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.542 "is_configured": true, 00:09:30.542 "data_offset": 2048, 00:09:30.542 "data_size": 63488 00:09:30.542 } 00:09:30.542 ] 00:09:30.542 }' 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.542 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:30.802 
16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.802 [2024-11-05 16:23:43.866834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.802 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.802 "name": "raid_bdev1", 00:09:30.802 "aliases": [ 00:09:30.802 "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e" 00:09:30.802 ], 00:09:30.802 "product_name": "Raid Volume", 00:09:30.802 "block_size": 512, 00:09:30.802 "num_blocks": 126976, 00:09:30.802 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:30.802 "assigned_rate_limits": { 00:09:30.802 "rw_ios_per_sec": 0, 00:09:30.802 "rw_mbytes_per_sec": 0, 00:09:30.802 "r_mbytes_per_sec": 0, 00:09:30.802 "w_mbytes_per_sec": 0 00:09:30.802 }, 00:09:30.802 "claimed": false, 00:09:30.802 "zoned": false, 00:09:30.802 "supported_io_types": { 00:09:30.802 "read": true, 00:09:30.802 "write": true, 00:09:30.802 "unmap": true, 00:09:30.802 "flush": true, 00:09:30.802 "reset": true, 00:09:30.802 "nvme_admin": false, 00:09:30.802 "nvme_io": false, 00:09:30.802 "nvme_io_md": false, 00:09:30.802 
"write_zeroes": true, 00:09:30.802 "zcopy": false, 00:09:30.802 "get_zone_info": false, 00:09:30.802 "zone_management": false, 00:09:30.802 "zone_append": false, 00:09:30.802 "compare": false, 00:09:30.802 "compare_and_write": false, 00:09:30.802 "abort": false, 00:09:30.802 "seek_hole": false, 00:09:30.802 "seek_data": false, 00:09:30.802 "copy": false, 00:09:30.802 "nvme_iov_md": false 00:09:30.802 }, 00:09:30.802 "memory_domains": [ 00:09:30.802 { 00:09:30.802 "dma_device_id": "system", 00:09:30.802 "dma_device_type": 1 00:09:30.802 }, 00:09:30.802 { 00:09:30.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.802 "dma_device_type": 2 00:09:30.802 }, 00:09:30.802 { 00:09:30.802 "dma_device_id": "system", 00:09:30.802 "dma_device_type": 1 00:09:30.802 }, 00:09:30.802 { 00:09:30.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.802 "dma_device_type": 2 00:09:30.802 } 00:09:30.802 ], 00:09:30.802 "driver_specific": { 00:09:30.802 "raid": { 00:09:30.802 "uuid": "f8bfeaa4-e918-4f32-b72c-c7e25d39f02e", 00:09:30.802 "strip_size_kb": 64, 00:09:30.802 "state": "online", 00:09:30.802 "raid_level": "concat", 00:09:30.802 "superblock": true, 00:09:30.802 "num_base_bdevs": 2, 00:09:30.802 "num_base_bdevs_discovered": 2, 00:09:30.802 "num_base_bdevs_operational": 2, 00:09:30.802 "base_bdevs_list": [ 00:09:30.802 { 00:09:30.802 "name": "pt1", 00:09:30.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.802 "is_configured": true, 00:09:30.802 "data_offset": 2048, 00:09:30.802 "data_size": 63488 00:09:30.802 }, 00:09:30.802 { 00:09:30.802 "name": "pt2", 00:09:30.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.802 "is_configured": true, 00:09:30.802 "data_offset": 2048, 00:09:30.802 "data_size": 63488 00:09:30.802 } 00:09:30.802 ] 00:09:30.802 } 00:09:30.803 } 00:09:30.803 }' 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:31.061 pt2' 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.061 16:23:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.061 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.062 16:23:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:31.062 [2024-11-05 16:23:44.078401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f8bfeaa4-e918-4f32-b72c-c7e25d39f02e '!=' f8bfeaa4-e918-4f32-b72c-c7e25d39f02e ']' 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62427 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62427 ']' 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62427 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.062 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62427 00:09:31.321 killing process with pid 62427 
00:09:31.321 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.321 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.321 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62427' 00:09:31.321 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62427 00:09:31.321 [2024-11-05 16:23:44.153729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.321 [2024-11-05 16:23:44.153825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.321 [2024-11-05 16:23:44.153878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.321 [2024-11-05 16:23:44.153889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.321 16:23:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62427 00:09:31.321 [2024-11-05 16:23:44.362485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.700 16:23:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:32.700 00:09:32.700 real 0m4.466s 00:09:32.700 user 0m6.232s 00:09:32.700 sys 0m0.777s 00:09:32.700 16:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:32.700 16:23:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 ************************************ 00:09:32.700 END TEST raid_superblock_test 00:09:32.700 ************************************ 00:09:32.700 16:23:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:32.700 16:23:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:32.700 16:23:45 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.700 16:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 ************************************ 00:09:32.700 START TEST raid_read_error_test 00:09:32.700 ************************************ 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:32.700 16:23:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DqaLY5XgpJ 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62633 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62633 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62633 ']' 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.700 16:23:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.700 [2024-11-05 16:23:45.653432] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:32.700 [2024-11-05 16:23:45.653590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62633 ] 00:09:32.959 [2024-11-05 16:23:45.827993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.959 [2024-11-05 16:23:45.945989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.218 [2024-11-05 16:23:46.149275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.218 [2024-11-05 16:23:46.149343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.478 BaseBdev1_malloc 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.478 true 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.478 [2024-11-05 16:23:46.552457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.478 [2024-11-05 16:23:46.552536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.478 [2024-11-05 16:23:46.552557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.478 [2024-11-05 16:23:46.552568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.478 [2024-11-05 16:23:46.554603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.478 [2024-11-05 16:23:46.554642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.478 BaseBdev1 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.478 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:33.739 BaseBdev2_malloc 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.739 true 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.739 [2024-11-05 16:23:46.622459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.739 [2024-11-05 16:23:46.622593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.739 [2024-11-05 16:23:46.622618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.739 [2024-11-05 16:23:46.622629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.739 [2024-11-05 16:23:46.624852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.739 [2024-11-05 16:23:46.624895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:33.739 BaseBdev2 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:33.739 
16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.739 [2024-11-05 16:23:46.634491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.739 [2024-11-05 16:23:46.636293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.739 [2024-11-05 16:23:46.636495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.739 [2024-11-05 16:23:46.636527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:33.739 [2024-11-05 16:23:46.636774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:33.739 [2024-11-05 16:23:46.636956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.739 [2024-11-05 16:23:46.636969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:33.739 [2024-11-05 16:23:46.637138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.739 "name": "raid_bdev1", 00:09:33.739 "uuid": "a0770b1b-59e7-4e07-abfe-e5d373f2f3d4", 00:09:33.739 "strip_size_kb": 64, 00:09:33.739 "state": "online", 00:09:33.739 "raid_level": "concat", 00:09:33.739 "superblock": true, 00:09:33.739 "num_base_bdevs": 2, 00:09:33.739 "num_base_bdevs_discovered": 2, 00:09:33.739 "num_base_bdevs_operational": 2, 00:09:33.739 "base_bdevs_list": [ 00:09:33.739 { 00:09:33.739 "name": "BaseBdev1", 00:09:33.739 "uuid": "7296357d-bbf5-59e0-8ba5-47cb26328215", 00:09:33.739 "is_configured": true, 00:09:33.739 "data_offset": 2048, 00:09:33.739 "data_size": 63488 00:09:33.739 }, 00:09:33.739 { 00:09:33.739 "name": "BaseBdev2", 00:09:33.739 "uuid": "e50ac3d5-b48a-540c-9f1f-d325ba210946", 00:09:33.739 "is_configured": true, 00:09:33.739 "data_offset": 2048, 00:09:33.739 "data_size": 63488 00:09:33.739 } 00:09:33.739 ] 00:09:33.739 }' 00:09:33.739 16:23:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.739 16:23:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.999 16:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.999 16:23:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:34.265 [2024-11-05 16:23:47.127090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.217 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.218 "name": "raid_bdev1", 00:09:35.218 "uuid": "a0770b1b-59e7-4e07-abfe-e5d373f2f3d4", 00:09:35.218 "strip_size_kb": 64, 00:09:35.218 "state": "online", 00:09:35.218 "raid_level": "concat", 00:09:35.218 "superblock": true, 00:09:35.218 "num_base_bdevs": 2, 00:09:35.218 "num_base_bdevs_discovered": 2, 00:09:35.218 "num_base_bdevs_operational": 2, 00:09:35.218 "base_bdevs_list": [ 00:09:35.218 { 00:09:35.218 "name": "BaseBdev1", 00:09:35.218 "uuid": "7296357d-bbf5-59e0-8ba5-47cb26328215", 00:09:35.218 "is_configured": true, 00:09:35.218 "data_offset": 2048, 00:09:35.218 "data_size": 63488 00:09:35.218 }, 00:09:35.218 { 00:09:35.218 "name": "BaseBdev2", 00:09:35.218 "uuid": "e50ac3d5-b48a-540c-9f1f-d325ba210946", 00:09:35.218 "is_configured": true, 00:09:35.218 "data_offset": 2048, 00:09:35.218 "data_size": 63488 00:09:35.218 } 00:09:35.218 ] 00:09:35.218 }' 00:09:35.218 16:23:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.218 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.477 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.477 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.477 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 [2024-11-05 16:23:48.560277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.478 [2024-11-05 16:23:48.560411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.478 [2024-11-05 16:23:48.563082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.478 [2024-11-05 16:23:48.563180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.478 [2024-11-05 16:23:48.563235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.478 [2024-11-05 16:23:48.563303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:35.478 { 00:09:35.478 "results": [ 00:09:35.478 { 00:09:35.478 "job": "raid_bdev1", 00:09:35.478 "core_mask": "0x1", 00:09:35.478 "workload": "randrw", 00:09:35.478 "percentage": 50, 00:09:35.478 "status": "finished", 00:09:35.478 "queue_depth": 1, 00:09:35.478 "io_size": 131072, 00:09:35.478 "runtime": 1.434281, 00:09:35.478 "iops": 13886.400224223844, 00:09:35.478 "mibps": 1735.8000280279805, 00:09:35.478 "io_failed": 1, 00:09:35.478 "io_timeout": 0, 00:09:35.478 "avg_latency_us": 100.53233769371454, 00:09:35.478 "min_latency_us": 26.494323144104804, 00:09:35.478 "max_latency_us": 1459.5353711790392 00:09:35.478 } 00:09:35.478 ], 00:09:35.478 "core_count": 1 00:09:35.478 } 00:09:35.478 16:23:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.478 16:23:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62633 00:09:35.478 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62633 ']' 00:09:35.478 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62633 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62633 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:35.737 killing process with pid 62633 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62633' 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62633 00:09:35.737 [2024-11-05 16:23:48.606590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.737 16:23:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62633 00:09:35.737 [2024-11-05 16:23:48.763199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DqaLY5XgpJ 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:37.122 16:23:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:37.122 00:09:37.122 real 0m4.534s 00:09:37.122 user 0m5.389s 00:09:37.122 sys 0m0.559s 00:09:37.122 ************************************ 00:09:37.122 END TEST raid_read_error_test 00:09:37.122 ************************************ 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.122 16:23:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 16:23:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:37.122 16:23:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:37.122 16:23:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.122 16:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.122 ************************************ 00:09:37.122 START TEST raid_write_error_test 00:09:37.122 ************************************ 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ygO5HNDc2U 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62779 
00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62779 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62779 ']' 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.122 16:23:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.388 [2024-11-05 16:23:50.267387] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:09:37.388 [2024-11-05 16:23:50.267544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62779 ] 00:09:37.388 [2024-11-05 16:23:50.449161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.655 [2024-11-05 16:23:50.597143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.925 [2024-11-05 16:23:50.838124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.925 [2024-11-05 16:23:50.838314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 BaseBdev1_malloc 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 true 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 [2024-11-05 16:23:51.215226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.197 [2024-11-05 16:23:51.215309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.197 [2024-11-05 16:23:51.215332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.197 [2024-11-05 16:23:51.215345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.197 [2024-11-05 16:23:51.218018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.197 [2024-11-05 16:23:51.218159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.197 BaseBdev1 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 BaseBdev2_malloc 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.197 16:23:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 true 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.197 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.197 [2024-11-05 16:23:51.286896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.472 [2024-11-05 16:23:51.287046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.472 [2024-11-05 16:23:51.287071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.472 [2024-11-05 16:23:51.287084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.472 [2024-11-05 16:23:51.289805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.472 [2024-11-05 16:23:51.289846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.472 BaseBdev2 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.472 [2024-11-05 16:23:51.298948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:38.472 [2024-11-05 16:23:51.301241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.472 [2024-11-05 16:23:51.301532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.472 [2024-11-05 16:23:51.301606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:38.472 [2024-11-05 16:23:51.301944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.472 [2024-11-05 16:23:51.302227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.472 [2024-11-05 16:23:51.302248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.472 [2024-11-05 16:23:51.302454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.472 16:23:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.472 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.472 "name": "raid_bdev1", 00:09:38.472 "uuid": "b6682717-bfb6-4ef5-bc5d-3a8820027bbb", 00:09:38.472 "strip_size_kb": 64, 00:09:38.472 "state": "online", 00:09:38.472 "raid_level": "concat", 00:09:38.472 "superblock": true, 00:09:38.472 "num_base_bdevs": 2, 00:09:38.473 "num_base_bdevs_discovered": 2, 00:09:38.473 "num_base_bdevs_operational": 2, 00:09:38.473 "base_bdevs_list": [ 00:09:38.473 { 00:09:38.473 "name": "BaseBdev1", 00:09:38.473 "uuid": "5ef37700-b0f7-5db7-a42c-d4e023b97d45", 00:09:38.473 "is_configured": true, 00:09:38.473 "data_offset": 2048, 00:09:38.473 "data_size": 63488 00:09:38.473 }, 00:09:38.473 { 00:09:38.473 "name": "BaseBdev2", 00:09:38.473 "uuid": "c6256349-4556-57a6-8e88-8f5307764ba6", 00:09:38.473 "is_configured": true, 00:09:38.473 "data_offset": 2048, 00:09:38.473 "data_size": 63488 00:09:38.473 } 00:09:38.473 ] 00:09:38.473 }' 00:09:38.473 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.473 16:23:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.736 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:38.736 16:23:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.995 [2024-11-05 16:23:51.887811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.933 "name": "raid_bdev1", 00:09:39.933 "uuid": "b6682717-bfb6-4ef5-bc5d-3a8820027bbb", 00:09:39.933 "strip_size_kb": 64, 00:09:39.933 "state": "online", 00:09:39.933 "raid_level": "concat", 00:09:39.933 "superblock": true, 00:09:39.933 "num_base_bdevs": 2, 00:09:39.933 "num_base_bdevs_discovered": 2, 00:09:39.933 "num_base_bdevs_operational": 2, 00:09:39.933 "base_bdevs_list": [ 00:09:39.933 { 00:09:39.933 "name": "BaseBdev1", 00:09:39.933 "uuid": "5ef37700-b0f7-5db7-a42c-d4e023b97d45", 00:09:39.933 "is_configured": true, 00:09:39.933 "data_offset": 2048, 00:09:39.933 "data_size": 63488 00:09:39.933 }, 00:09:39.933 { 00:09:39.933 "name": "BaseBdev2", 00:09:39.933 "uuid": "c6256349-4556-57a6-8e88-8f5307764ba6", 00:09:39.933 "is_configured": true, 00:09:39.933 "data_offset": 2048, 00:09:39.933 "data_size": 63488 00:09:39.933 } 00:09:39.933 ] 00:09:39.933 }' 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.933 16:23:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.502 [2024-11-05 16:23:53.299429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.502 [2024-11-05 16:23:53.299490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.502 [2024-11-05 16:23:53.302846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.502 [2024-11-05 16:23:53.302903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.502 [2024-11-05 16:23:53.302943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.502 [2024-11-05 16:23:53.302961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:40.502 { 00:09:40.502 "results": [ 00:09:40.502 { 00:09:40.502 "job": "raid_bdev1", 00:09:40.502 "core_mask": "0x1", 00:09:40.502 "workload": "randrw", 00:09:40.502 "percentage": 50, 00:09:40.502 "status": "finished", 00:09:40.502 "queue_depth": 1, 00:09:40.502 "io_size": 131072, 00:09:40.502 "runtime": 1.411744, 00:09:40.502 "iops": 11454.626334519573, 00:09:40.502 "mibps": 1431.8282918149466, 00:09:40.502 "io_failed": 1, 00:09:40.502 "io_timeout": 0, 00:09:40.502 "avg_latency_us": 122.34043508268645, 00:09:40.502 "min_latency_us": 31.07772925764192, 00:09:40.502 "max_latency_us": 1810.1100436681222 00:09:40.502 } 00:09:40.502 ], 00:09:40.502 "core_count": 1 00:09:40.502 } 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62779 00:09:40.502 16:23:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62779 ']' 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62779 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62779 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62779' 00:09:40.502 killing process with pid 62779 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62779 00:09:40.502 [2024-11-05 16:23:53.347541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.502 16:23:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62779 00:09:40.502 [2024-11-05 16:23:53.528645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ygO5HNDc2U 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:42.412 ************************************ 00:09:42.412 END TEST raid_write_error_test 00:09:42.412 
************************************ 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:42.412 00:09:42.412 real 0m4.918s 00:09:42.412 user 0m5.815s 00:09:42.412 sys 0m0.662s 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.412 16:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.412 16:23:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:42.412 16:23:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:42.412 16:23:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:42.412 16:23:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.412 16:23:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.412 ************************************ 00:09:42.412 START TEST raid_state_function_test 00:09:42.412 ************************************ 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62928 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62928' 00:09:42.412 Process raid pid: 62928 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62928 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62928 ']' 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.412 16:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.412 [2024-11-05 16:23:55.240972] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:09:42.412 [2024-11-05 16:23:55.241211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.412 [2024-11-05 16:23:55.417995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.671 [2024-11-05 16:23:55.547674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.931 [2024-11-05 16:23:55.789239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.931 [2024-11-05 16:23:55.789291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.190 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.191 [2024-11-05 16:23:56.131461] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.191 [2024-11-05 16:23:56.131534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.191 [2024-11-05 16:23:56.131547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.191 [2024-11-05 16:23:56.131558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.191 16:23:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.191 "name": "Existed_Raid", 00:09:43.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.191 "strip_size_kb": 0, 00:09:43.191 "state": "configuring", 00:09:43.191 
"raid_level": "raid1", 00:09:43.191 "superblock": false, 00:09:43.191 "num_base_bdevs": 2, 00:09:43.191 "num_base_bdevs_discovered": 0, 00:09:43.191 "num_base_bdevs_operational": 2, 00:09:43.191 "base_bdevs_list": [ 00:09:43.191 { 00:09:43.191 "name": "BaseBdev1", 00:09:43.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.191 "is_configured": false, 00:09:43.191 "data_offset": 0, 00:09:43.191 "data_size": 0 00:09:43.191 }, 00:09:43.191 { 00:09:43.191 "name": "BaseBdev2", 00:09:43.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.191 "is_configured": false, 00:09:43.191 "data_offset": 0, 00:09:43.191 "data_size": 0 00:09:43.191 } 00:09:43.191 ] 00:09:43.191 }' 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.191 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 [2024-11-05 16:23:56.602656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.761 [2024-11-05 16:23:56.602704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:43.761 [2024-11-05 16:23:56.614661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.761 [2024-11-05 16:23:56.614720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.761 [2024-11-05 16:23:56.614730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.761 [2024-11-05 16:23:56.614743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 [2024-11-05 16:23:56.664406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.761 BaseBdev1 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 [ 00:09:43.761 { 00:09:43.761 "name": "BaseBdev1", 00:09:43.761 "aliases": [ 00:09:43.761 "947f0fb9-2561-48f3-9742-1f52ed67e6b9" 00:09:43.761 ], 00:09:43.761 "product_name": "Malloc disk", 00:09:43.761 "block_size": 512, 00:09:43.761 "num_blocks": 65536, 00:09:43.761 "uuid": "947f0fb9-2561-48f3-9742-1f52ed67e6b9", 00:09:43.761 "assigned_rate_limits": { 00:09:43.761 "rw_ios_per_sec": 0, 00:09:43.761 "rw_mbytes_per_sec": 0, 00:09:43.761 "r_mbytes_per_sec": 0, 00:09:43.761 "w_mbytes_per_sec": 0 00:09:43.761 }, 00:09:43.761 "claimed": true, 00:09:43.761 "claim_type": "exclusive_write", 00:09:43.761 "zoned": false, 00:09:43.761 "supported_io_types": { 00:09:43.761 "read": true, 00:09:43.761 "write": true, 00:09:43.761 "unmap": true, 00:09:43.761 "flush": true, 00:09:43.761 "reset": true, 00:09:43.761 "nvme_admin": false, 00:09:43.761 "nvme_io": false, 00:09:43.761 "nvme_io_md": false, 00:09:43.761 "write_zeroes": true, 00:09:43.761 "zcopy": true, 00:09:43.761 "get_zone_info": false, 00:09:43.761 "zone_management": false, 00:09:43.761 "zone_append": false, 00:09:43.761 "compare": false, 00:09:43.761 "compare_and_write": false, 00:09:43.761 "abort": true, 00:09:43.761 "seek_hole": false, 00:09:43.761 "seek_data": false, 00:09:43.761 "copy": true, 00:09:43.761 "nvme_iov_md": 
false 00:09:43.761 }, 00:09:43.761 "memory_domains": [ 00:09:43.761 { 00:09:43.761 "dma_device_id": "system", 00:09:43.761 "dma_device_type": 1 00:09:43.761 }, 00:09:43.761 { 00:09:43.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.761 "dma_device_type": 2 00:09:43.761 } 00:09:43.761 ], 00:09:43.761 "driver_specific": {} 00:09:43.761 } 00:09:43.761 ] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.761 
16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.761 "name": "Existed_Raid", 00:09:43.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.761 "strip_size_kb": 0, 00:09:43.761 "state": "configuring", 00:09:43.761 "raid_level": "raid1", 00:09:43.761 "superblock": false, 00:09:43.761 "num_base_bdevs": 2, 00:09:43.761 "num_base_bdevs_discovered": 1, 00:09:43.761 "num_base_bdevs_operational": 2, 00:09:43.761 "base_bdevs_list": [ 00:09:43.761 { 00:09:43.761 "name": "BaseBdev1", 00:09:43.761 "uuid": "947f0fb9-2561-48f3-9742-1f52ed67e6b9", 00:09:43.761 "is_configured": true, 00:09:43.761 "data_offset": 0, 00:09:43.761 "data_size": 65536 00:09:43.761 }, 00:09:43.761 { 00:09:43.761 "name": "BaseBdev2", 00:09:43.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.761 "is_configured": false, 00:09:43.761 "data_offset": 0, 00:09:43.761 "data_size": 0 00:09:43.761 } 00:09:43.761 ] 00:09:43.761 }' 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.761 16:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.367 [2024-11-05 16:23:57.147693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.367 [2024-11-05 16:23:57.147819] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.367 [2024-11-05 16:23:57.155739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.367 [2024-11-05 16:23:57.158049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.367 [2024-11-05 16:23:57.158147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.367 "name": "Existed_Raid", 00:09:44.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.367 "strip_size_kb": 0, 00:09:44.367 "state": "configuring", 00:09:44.367 "raid_level": "raid1", 00:09:44.367 "superblock": false, 00:09:44.367 "num_base_bdevs": 2, 00:09:44.367 "num_base_bdevs_discovered": 1, 00:09:44.367 "num_base_bdevs_operational": 2, 00:09:44.367 "base_bdevs_list": [ 00:09:44.367 { 00:09:44.367 "name": "BaseBdev1", 00:09:44.367 "uuid": "947f0fb9-2561-48f3-9742-1f52ed67e6b9", 00:09:44.367 "is_configured": true, 00:09:44.367 "data_offset": 0, 00:09:44.367 "data_size": 65536 00:09:44.367 }, 00:09:44.367 { 00:09:44.367 "name": "BaseBdev2", 00:09:44.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.367 "is_configured": false, 00:09:44.367 "data_offset": 0, 00:09:44.367 "data_size": 0 00:09:44.367 } 00:09:44.367 ] 
00:09:44.367 }' 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.367 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.624 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.624 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.624 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.624 [2024-11-05 16:23:57.644241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.625 [2024-11-05 16:23:57.644305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.625 [2024-11-05 16:23:57.644314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:44.625 [2024-11-05 16:23:57.644674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:44.625 [2024-11-05 16:23:57.644863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.625 [2024-11-05 16:23:57.644880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:44.625 [2024-11-05 16:23:57.645207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.625 BaseBdev2 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.625 [ 00:09:44.625 { 00:09:44.625 "name": "BaseBdev2", 00:09:44.625 "aliases": [ 00:09:44.625 "7a534119-c2d3-4f96-88df-233ea91bb61d" 00:09:44.625 ], 00:09:44.625 "product_name": "Malloc disk", 00:09:44.625 "block_size": 512, 00:09:44.625 "num_blocks": 65536, 00:09:44.625 "uuid": "7a534119-c2d3-4f96-88df-233ea91bb61d", 00:09:44.625 "assigned_rate_limits": { 00:09:44.625 "rw_ios_per_sec": 0, 00:09:44.625 "rw_mbytes_per_sec": 0, 00:09:44.625 "r_mbytes_per_sec": 0, 00:09:44.625 "w_mbytes_per_sec": 0 00:09:44.625 }, 00:09:44.625 "claimed": true, 00:09:44.625 "claim_type": "exclusive_write", 00:09:44.625 "zoned": false, 00:09:44.625 "supported_io_types": { 00:09:44.625 "read": true, 00:09:44.625 "write": true, 00:09:44.625 "unmap": true, 00:09:44.625 "flush": true, 00:09:44.625 "reset": true, 00:09:44.625 "nvme_admin": false, 00:09:44.625 "nvme_io": false, 00:09:44.625 "nvme_io_md": false, 00:09:44.625 "write_zeroes": 
true, 00:09:44.625 "zcopy": true, 00:09:44.625 "get_zone_info": false, 00:09:44.625 "zone_management": false, 00:09:44.625 "zone_append": false, 00:09:44.625 "compare": false, 00:09:44.625 "compare_and_write": false, 00:09:44.625 "abort": true, 00:09:44.625 "seek_hole": false, 00:09:44.625 "seek_data": false, 00:09:44.625 "copy": true, 00:09:44.625 "nvme_iov_md": false 00:09:44.625 }, 00:09:44.625 "memory_domains": [ 00:09:44.625 { 00:09:44.625 "dma_device_id": "system", 00:09:44.625 "dma_device_type": 1 00:09:44.625 }, 00:09:44.625 { 00:09:44.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.625 "dma_device_type": 2 00:09:44.625 } 00:09:44.625 ], 00:09:44.625 "driver_specific": {} 00:09:44.625 } 00:09:44.625 ] 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.625 16:23:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.625 16:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.883 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.883 "name": "Existed_Raid", 00:09:44.883 "uuid": "a442d0f8-81ce-440e-a59b-c7feb0cbecb2", 00:09:44.883 "strip_size_kb": 0, 00:09:44.883 "state": "online", 00:09:44.883 "raid_level": "raid1", 00:09:44.883 "superblock": false, 00:09:44.883 "num_base_bdevs": 2, 00:09:44.883 "num_base_bdevs_discovered": 2, 00:09:44.883 "num_base_bdevs_operational": 2, 00:09:44.883 "base_bdevs_list": [ 00:09:44.883 { 00:09:44.883 "name": "BaseBdev1", 00:09:44.883 "uuid": "947f0fb9-2561-48f3-9742-1f52ed67e6b9", 00:09:44.883 "is_configured": true, 00:09:44.883 "data_offset": 0, 00:09:44.883 "data_size": 65536 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "name": "BaseBdev2", 00:09:44.883 "uuid": "7a534119-c2d3-4f96-88df-233ea91bb61d", 00:09:44.883 "is_configured": true, 00:09:44.883 "data_offset": 0, 00:09:44.883 "data_size": 65536 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }' 00:09:44.883 16:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.883 16:23:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.141 [2024-11-05 16:23:58.183782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.141 "name": "Existed_Raid", 00:09:45.141 "aliases": [ 00:09:45.141 "a442d0f8-81ce-440e-a59b-c7feb0cbecb2" 00:09:45.141 ], 00:09:45.141 "product_name": "Raid Volume", 00:09:45.141 "block_size": 512, 00:09:45.141 "num_blocks": 65536, 00:09:45.141 "uuid": "a442d0f8-81ce-440e-a59b-c7feb0cbecb2", 00:09:45.141 "assigned_rate_limits": { 00:09:45.141 "rw_ios_per_sec": 0, 00:09:45.141 "rw_mbytes_per_sec": 0, 00:09:45.141 "r_mbytes_per_sec": 0, 00:09:45.141 
"w_mbytes_per_sec": 0 00:09:45.141 }, 00:09:45.141 "claimed": false, 00:09:45.141 "zoned": false, 00:09:45.141 "supported_io_types": { 00:09:45.141 "read": true, 00:09:45.141 "write": true, 00:09:45.141 "unmap": false, 00:09:45.141 "flush": false, 00:09:45.141 "reset": true, 00:09:45.141 "nvme_admin": false, 00:09:45.141 "nvme_io": false, 00:09:45.141 "nvme_io_md": false, 00:09:45.141 "write_zeroes": true, 00:09:45.141 "zcopy": false, 00:09:45.141 "get_zone_info": false, 00:09:45.141 "zone_management": false, 00:09:45.141 "zone_append": false, 00:09:45.141 "compare": false, 00:09:45.141 "compare_and_write": false, 00:09:45.141 "abort": false, 00:09:45.141 "seek_hole": false, 00:09:45.141 "seek_data": false, 00:09:45.141 "copy": false, 00:09:45.141 "nvme_iov_md": false 00:09:45.141 }, 00:09:45.141 "memory_domains": [ 00:09:45.141 { 00:09:45.141 "dma_device_id": "system", 00:09:45.141 "dma_device_type": 1 00:09:45.141 }, 00:09:45.141 { 00:09:45.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.141 "dma_device_type": 2 00:09:45.141 }, 00:09:45.141 { 00:09:45.141 "dma_device_id": "system", 00:09:45.141 "dma_device_type": 1 00:09:45.141 }, 00:09:45.141 { 00:09:45.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.141 "dma_device_type": 2 00:09:45.141 } 00:09:45.141 ], 00:09:45.141 "driver_specific": { 00:09:45.141 "raid": { 00:09:45.141 "uuid": "a442d0f8-81ce-440e-a59b-c7feb0cbecb2", 00:09:45.141 "strip_size_kb": 0, 00:09:45.141 "state": "online", 00:09:45.141 "raid_level": "raid1", 00:09:45.141 "superblock": false, 00:09:45.141 "num_base_bdevs": 2, 00:09:45.141 "num_base_bdevs_discovered": 2, 00:09:45.141 "num_base_bdevs_operational": 2, 00:09:45.141 "base_bdevs_list": [ 00:09:45.141 { 00:09:45.141 "name": "BaseBdev1", 00:09:45.141 "uuid": "947f0fb9-2561-48f3-9742-1f52ed67e6b9", 00:09:45.141 "is_configured": true, 00:09:45.141 "data_offset": 0, 00:09:45.141 "data_size": 65536 00:09:45.141 }, 00:09:45.141 { 00:09:45.141 "name": "BaseBdev2", 00:09:45.141 "uuid": 
"7a534119-c2d3-4f96-88df-233ea91bb61d", 00:09:45.141 "is_configured": true, 00:09:45.141 "data_offset": 0, 00:09:45.141 "data_size": 65536 00:09:45.141 } 00:09:45.141 ] 00:09:45.141 } 00:09:45.141 } 00:09:45.141 }' 00:09:45.141 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:45.400 BaseBdev2' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.400 16:23:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.400 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.400 [2024-11-05 16:23:58.435088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.658 "name": "Existed_Raid", 00:09:45.658 "uuid": "a442d0f8-81ce-440e-a59b-c7feb0cbecb2", 00:09:45.658 "strip_size_kb": 0, 00:09:45.658 "state": "online", 00:09:45.658 "raid_level": "raid1", 00:09:45.658 "superblock": false, 00:09:45.658 "num_base_bdevs": 2, 00:09:45.658 "num_base_bdevs_discovered": 1, 00:09:45.658 "num_base_bdevs_operational": 1, 00:09:45.658 "base_bdevs_list": [ 00:09:45.658 { 
00:09:45.658 "name": null, 00:09:45.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.658 "is_configured": false, 00:09:45.658 "data_offset": 0, 00:09:45.658 "data_size": 65536 00:09:45.658 }, 00:09:45.658 { 00:09:45.658 "name": "BaseBdev2", 00:09:45.658 "uuid": "7a534119-c2d3-4f96-88df-233ea91bb61d", 00:09:45.658 "is_configured": true, 00:09:45.658 "data_offset": 0, 00:09:45.658 "data_size": 65536 00:09:45.658 } 00:09:45.658 ] 00:09:45.658 }' 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.658 16:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:46.225 [2024-11-05 16:23:59.088624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:46.225 [2024-11-05 16:23:59.088797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.225 [2024-11-05 16:23:59.204736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.225 [2024-11-05 16:23:59.204915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.225 [2024-11-05 16:23:59.204980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62928 00:09:46.225 16:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62928 ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62928 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62928 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62928' 00:09:46.225 killing process with pid 62928 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62928 00:09:46.225 [2024-11-05 16:23:59.306313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.225 16:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62928 00:09:46.485 [2024-11-05 16:23:59.326791] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:47.860 00:09:47.860 real 0m5.476s 00:09:47.860 user 0m7.854s 00:09:47.860 sys 0m0.878s 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.860 ************************************ 00:09:47.860 END TEST raid_state_function_test 00:09:47.860 ************************************ 00:09:47.860 16:24:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:47.860 16:24:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:47.860 16:24:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.860 16:24:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.860 ************************************ 00:09:47.860 START TEST raid_state_function_test_sb 00:09:47.860 ************************************ 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.860 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63181 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63181' 00:09:47.861 Process raid pid: 63181 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63181 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63181 ']' 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.861 16:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.861 16:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.861 [2024-11-05 16:24:00.772856] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:47.861 [2024-11-05 16:24:00.773115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.861 [2024-11-05 16:24:00.945164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.118 [2024-11-05 16:24:01.087082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.378 [2024-11-05 16:24:01.315019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.378 [2024-11-05 16:24:01.315175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.945 [2024-11-05 16:24:01.779061] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.945 [2024-11-05 16:24:01.779157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.945 [2024-11-05 16:24:01.779174] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.945 [2024-11-05 16:24:01.779186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.945 "name": "Existed_Raid", 00:09:48.945 "uuid": "3f8801c0-ab82-4255-99e4-230028e5f950", 00:09:48.945 "strip_size_kb": 0, 00:09:48.945 "state": "configuring", 00:09:48.945 "raid_level": "raid1", 00:09:48.945 "superblock": true, 00:09:48.945 "num_base_bdevs": 2, 00:09:48.945 "num_base_bdevs_discovered": 0, 00:09:48.945 "num_base_bdevs_operational": 2, 00:09:48.945 "base_bdevs_list": [ 00:09:48.945 { 00:09:48.945 "name": "BaseBdev1", 00:09:48.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.945 "is_configured": false, 00:09:48.945 "data_offset": 0, 00:09:48.945 "data_size": 0 00:09:48.945 }, 00:09:48.945 { 00:09:48.945 "name": "BaseBdev2", 00:09:48.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.945 "is_configured": false, 00:09:48.945 "data_offset": 0, 00:09:48.945 "data_size": 0 00:09:48.945 } 00:09:48.945 ] 00:09:48.945 }' 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.945 16:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 [2024-11-05 16:24:02.206258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:49.205 [2024-11-05 16:24:02.206375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 [2024-11-05 16:24:02.214243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.205 [2024-11-05 16:24:02.214360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.205 [2024-11-05 16:24:02.214413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.205 [2024-11-05 16:24:02.214468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 [2024-11-05 16:24:02.264400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.205 BaseBdev1 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.205 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.205 [ 00:09:49.205 { 00:09:49.205 "name": "BaseBdev1", 00:09:49.205 "aliases": [ 00:09:49.205 "187646e4-ab51-4e6f-8bde-a6c55d21aec2" 00:09:49.205 ], 00:09:49.205 "product_name": "Malloc disk", 00:09:49.205 "block_size": 512, 00:09:49.205 "num_blocks": 65536, 00:09:49.205 "uuid": "187646e4-ab51-4e6f-8bde-a6c55d21aec2", 00:09:49.206 "assigned_rate_limits": { 00:09:49.206 "rw_ios_per_sec": 0, 00:09:49.206 "rw_mbytes_per_sec": 0, 00:09:49.206 "r_mbytes_per_sec": 0, 00:09:49.206 "w_mbytes_per_sec": 0 00:09:49.206 }, 00:09:49.206 "claimed": true, 
00:09:49.206 "claim_type": "exclusive_write", 00:09:49.206 "zoned": false, 00:09:49.206 "supported_io_types": { 00:09:49.206 "read": true, 00:09:49.206 "write": true, 00:09:49.206 "unmap": true, 00:09:49.206 "flush": true, 00:09:49.206 "reset": true, 00:09:49.206 "nvme_admin": false, 00:09:49.206 "nvme_io": false, 00:09:49.206 "nvme_io_md": false, 00:09:49.206 "write_zeroes": true, 00:09:49.206 "zcopy": true, 00:09:49.206 "get_zone_info": false, 00:09:49.206 "zone_management": false, 00:09:49.206 "zone_append": false, 00:09:49.206 "compare": false, 00:09:49.206 "compare_and_write": false, 00:09:49.206 "abort": true, 00:09:49.206 "seek_hole": false, 00:09:49.206 "seek_data": false, 00:09:49.206 "copy": true, 00:09:49.206 "nvme_iov_md": false 00:09:49.206 }, 00:09:49.464 "memory_domains": [ 00:09:49.464 { 00:09:49.464 "dma_device_id": "system", 00:09:49.464 "dma_device_type": 1 00:09:49.464 }, 00:09:49.464 { 00:09:49.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.464 "dma_device_type": 2 00:09:49.464 } 00:09:49.464 ], 00:09:49.464 "driver_specific": {} 00:09:49.464 } 00:09:49.464 ] 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.464 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.464 "name": "Existed_Raid", 00:09:49.465 "uuid": "0603a23b-993e-4943-a40d-eefe246b2589", 00:09:49.465 "strip_size_kb": 0, 00:09:49.465 "state": "configuring", 00:09:49.465 "raid_level": "raid1", 00:09:49.465 "superblock": true, 00:09:49.465 "num_base_bdevs": 2, 00:09:49.465 "num_base_bdevs_discovered": 1, 00:09:49.465 "num_base_bdevs_operational": 2, 00:09:49.465 "base_bdevs_list": [ 00:09:49.465 { 00:09:49.465 "name": "BaseBdev1", 00:09:49.465 "uuid": "187646e4-ab51-4e6f-8bde-a6c55d21aec2", 00:09:49.465 "is_configured": true, 00:09:49.465 "data_offset": 2048, 00:09:49.465 "data_size": 63488 00:09:49.465 }, 00:09:49.465 { 00:09:49.465 "name": "BaseBdev2", 00:09:49.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.465 "is_configured": false, 00:09:49.465 
"data_offset": 0, 00:09:49.465 "data_size": 0 00:09:49.465 } 00:09:49.465 ] 00:09:49.465 }' 00:09:49.465 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.465 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 [2024-11-05 16:24:02.723688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.723 [2024-11-05 16:24:02.723805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 [2024-11-05 16:24:02.735731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.723 [2024-11-05 16:24:02.737953] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.723 [2024-11-05 16:24:02.738042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.723 "name": "Existed_Raid", 00:09:49.723 "uuid": "d49700d6-5918-4245-8d50-2bd7707a60a8", 00:09:49.723 "strip_size_kb": 0, 00:09:49.723 "state": "configuring", 00:09:49.723 "raid_level": "raid1", 00:09:49.723 "superblock": true, 00:09:49.723 "num_base_bdevs": 2, 00:09:49.723 "num_base_bdevs_discovered": 1, 00:09:49.723 "num_base_bdevs_operational": 2, 00:09:49.723 "base_bdevs_list": [ 00:09:49.723 { 00:09:49.723 "name": "BaseBdev1", 00:09:49.723 "uuid": "187646e4-ab51-4e6f-8bde-a6c55d21aec2", 00:09:49.723 "is_configured": true, 00:09:49.723 "data_offset": 2048, 00:09:49.723 "data_size": 63488 00:09:49.723 }, 00:09:49.723 { 00:09:49.723 "name": "BaseBdev2", 00:09:49.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.723 "is_configured": false, 00:09:49.723 "data_offset": 0, 00:09:49.723 "data_size": 0 00:09:49.723 } 00:09:49.723 ] 00:09:49.723 }' 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.723 16:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.290 [2024-11-05 16:24:03.211824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.290 [2024-11-05 16:24:03.212165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:50.290 [2024-11-05 16:24:03.212185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.290 BaseBdev2 00:09:50.290 [2024-11-05 16:24:03.212545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:09:50.290 [2024-11-05 16:24:03.212754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:50.290 [2024-11-05 16:24:03.212772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:50.290 [2024-11-05 16:24:03.212960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.290 [ 00:09:50.290 { 00:09:50.290 "name": "BaseBdev2", 00:09:50.290 "aliases": [ 00:09:50.290 "8b4e847a-3799-41b2-929f-8250d906afa9" 00:09:50.290 ], 00:09:50.290 "product_name": "Malloc disk", 00:09:50.290 "block_size": 512, 00:09:50.290 "num_blocks": 65536, 00:09:50.290 "uuid": "8b4e847a-3799-41b2-929f-8250d906afa9", 00:09:50.290 "assigned_rate_limits": { 00:09:50.290 "rw_ios_per_sec": 0, 00:09:50.290 "rw_mbytes_per_sec": 0, 00:09:50.290 "r_mbytes_per_sec": 0, 00:09:50.290 "w_mbytes_per_sec": 0 00:09:50.290 }, 00:09:50.290 "claimed": true, 00:09:50.290 "claim_type": "exclusive_write", 00:09:50.290 "zoned": false, 00:09:50.290 "supported_io_types": { 00:09:50.290 "read": true, 00:09:50.290 "write": true, 00:09:50.290 "unmap": true, 00:09:50.290 "flush": true, 00:09:50.290 "reset": true, 00:09:50.290 "nvme_admin": false, 00:09:50.290 "nvme_io": false, 00:09:50.290 "nvme_io_md": false, 00:09:50.290 "write_zeroes": true, 00:09:50.290 "zcopy": true, 00:09:50.290 "get_zone_info": false, 00:09:50.290 "zone_management": false, 00:09:50.290 "zone_append": false, 00:09:50.290 "compare": false, 00:09:50.290 "compare_and_write": false, 00:09:50.290 "abort": true, 00:09:50.290 "seek_hole": false, 00:09:50.290 "seek_data": false, 00:09:50.290 "copy": true, 00:09:50.290 "nvme_iov_md": false 00:09:50.290 }, 00:09:50.290 "memory_domains": [ 00:09:50.290 { 00:09:50.290 "dma_device_id": "system", 00:09:50.290 "dma_device_type": 1 00:09:50.290 }, 00:09:50.290 { 00:09:50.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.290 "dma_device_type": 2 00:09:50.290 } 00:09:50.290 ], 00:09:50.290 "driver_specific": {} 00:09:50.290 } 00:09:50.290 ] 00:09:50.290 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:50.291 "name": "Existed_Raid", 00:09:50.291 "uuid": "d49700d6-5918-4245-8d50-2bd7707a60a8", 00:09:50.291 "strip_size_kb": 0, 00:09:50.291 "state": "online", 00:09:50.291 "raid_level": "raid1", 00:09:50.291 "superblock": true, 00:09:50.291 "num_base_bdevs": 2, 00:09:50.291 "num_base_bdevs_discovered": 2, 00:09:50.291 "num_base_bdevs_operational": 2, 00:09:50.291 "base_bdevs_list": [ 00:09:50.291 { 00:09:50.291 "name": "BaseBdev1", 00:09:50.291 "uuid": "187646e4-ab51-4e6f-8bde-a6c55d21aec2", 00:09:50.291 "is_configured": true, 00:09:50.291 "data_offset": 2048, 00:09:50.291 "data_size": 63488 00:09:50.291 }, 00:09:50.291 { 00:09:50.291 "name": "BaseBdev2", 00:09:50.291 "uuid": "8b4e847a-3799-41b2-929f-8250d906afa9", 00:09:50.291 "is_configured": true, 00:09:50.291 "data_offset": 2048, 00:09:50.291 "data_size": 63488 00:09:50.291 } 00:09:50.291 ] 00:09:50.291 }' 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.291 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.859 [2024-11-05 16:24:03.687334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.859 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.859 "name": "Existed_Raid", 00:09:50.859 "aliases": [ 00:09:50.859 "d49700d6-5918-4245-8d50-2bd7707a60a8" 00:09:50.859 ], 00:09:50.859 "product_name": "Raid Volume", 00:09:50.859 "block_size": 512, 00:09:50.859 "num_blocks": 63488, 00:09:50.859 "uuid": "d49700d6-5918-4245-8d50-2bd7707a60a8", 00:09:50.859 "assigned_rate_limits": { 00:09:50.859 "rw_ios_per_sec": 0, 00:09:50.859 "rw_mbytes_per_sec": 0, 00:09:50.859 "r_mbytes_per_sec": 0, 00:09:50.859 "w_mbytes_per_sec": 0 00:09:50.859 }, 00:09:50.859 "claimed": false, 00:09:50.859 "zoned": false, 00:09:50.859 "supported_io_types": { 00:09:50.859 "read": true, 00:09:50.859 "write": true, 00:09:50.859 "unmap": false, 00:09:50.859 "flush": false, 00:09:50.859 "reset": true, 00:09:50.859 "nvme_admin": false, 00:09:50.859 "nvme_io": false, 00:09:50.859 "nvme_io_md": false, 00:09:50.859 "write_zeroes": true, 00:09:50.859 "zcopy": false, 00:09:50.859 "get_zone_info": false, 00:09:50.859 "zone_management": false, 00:09:50.859 "zone_append": false, 00:09:50.859 "compare": false, 00:09:50.859 "compare_and_write": false, 00:09:50.859 "abort": false, 00:09:50.859 "seek_hole": false, 00:09:50.859 "seek_data": false, 00:09:50.859 "copy": false, 00:09:50.859 "nvme_iov_md": false 00:09:50.859 }, 00:09:50.859 "memory_domains": [ 00:09:50.859 { 00:09:50.859 "dma_device_id": "system", 00:09:50.859 "dma_device_type": 1 00:09:50.859 }, 
00:09:50.859 { 00:09:50.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.859 "dma_device_type": 2 00:09:50.859 }, 00:09:50.859 { 00:09:50.859 "dma_device_id": "system", 00:09:50.859 "dma_device_type": 1 00:09:50.859 }, 00:09:50.859 { 00:09:50.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.859 "dma_device_type": 2 00:09:50.859 } 00:09:50.859 ], 00:09:50.859 "driver_specific": { 00:09:50.859 "raid": { 00:09:50.859 "uuid": "d49700d6-5918-4245-8d50-2bd7707a60a8", 00:09:50.859 "strip_size_kb": 0, 00:09:50.859 "state": "online", 00:09:50.859 "raid_level": "raid1", 00:09:50.859 "superblock": true, 00:09:50.859 "num_base_bdevs": 2, 00:09:50.859 "num_base_bdevs_discovered": 2, 00:09:50.859 "num_base_bdevs_operational": 2, 00:09:50.859 "base_bdevs_list": [ 00:09:50.859 { 00:09:50.859 "name": "BaseBdev1", 00:09:50.859 "uuid": "187646e4-ab51-4e6f-8bde-a6c55d21aec2", 00:09:50.859 "is_configured": true, 00:09:50.859 "data_offset": 2048, 00:09:50.859 "data_size": 63488 00:09:50.859 }, 00:09:50.859 { 00:09:50.859 "name": "BaseBdev2", 00:09:50.859 "uuid": "8b4e847a-3799-41b2-929f-8250d906afa9", 00:09:50.859 "is_configured": true, 00:09:50.859 "data_offset": 2048, 00:09:50.859 "data_size": 63488 00:09:50.859 } 00:09:50.859 ] 00:09:50.859 } 00:09:50.859 } 00:09:50.859 }' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.860 BaseBdev2' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.860 16:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.860 [2024-11-05 16:24:03.902741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.120 
16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.120 "name": "Existed_Raid", 00:09:51.120 "uuid": "d49700d6-5918-4245-8d50-2bd7707a60a8", 00:09:51.120 "strip_size_kb": 0, 00:09:51.120 "state": "online", 00:09:51.120 "raid_level": "raid1", 00:09:51.120 "superblock": true, 00:09:51.120 "num_base_bdevs": 2, 00:09:51.120 "num_base_bdevs_discovered": 1, 00:09:51.120 "num_base_bdevs_operational": 1, 00:09:51.120 "base_bdevs_list": [ 00:09:51.120 { 00:09:51.120 "name": null, 00:09:51.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.120 "is_configured": false, 00:09:51.120 "data_offset": 0, 00:09:51.120 "data_size": 63488 00:09:51.120 }, 00:09:51.120 { 00:09:51.120 "name": "BaseBdev2", 00:09:51.120 "uuid": "8b4e847a-3799-41b2-929f-8250d906afa9", 00:09:51.120 "is_configured": true, 00:09:51.120 "data_offset": 2048, 00:09:51.120 "data_size": 63488 00:09:51.120 } 00:09:51.120 ] 00:09:51.120 }' 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.120 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:51.380 16:24:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.380 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.640 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.640 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.640 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:51.640 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.641 [2024-11-05 16:24:04.505902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.641 [2024-11-05 16:24:04.506014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.641 [2024-11-05 16:24:04.605200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.641 [2024-11-05 16:24:04.605260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.641 [2024-11-05 16:24:04.605274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63181 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63181 ']' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63181 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63181 00:09:51.641 killing process with pid 63181 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63181' 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63181 00:09:51.641 [2024-11-05 16:24:04.688711] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.641 16:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63181 00:09:51.641 [2024-11-05 16:24:04.706869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.021 16:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:53.021 00:09:53.021 real 0m5.216s 00:09:53.021 user 0m7.557s 00:09:53.021 sys 0m0.804s 00:09:53.021 16:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.021 ************************************ 00:09:53.021 END TEST raid_state_function_test_sb 00:09:53.021 ************************************ 00:09:53.021 16:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 16:24:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:53.021 16:24:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:53.021 16:24:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.021 16:24:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 ************************************ 00:09:53.021 START TEST raid_superblock_test 00:09:53.021 ************************************ 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63433 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63433 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63433 ']' 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:53.021 16:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 [2024-11-05 16:24:06.037022] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:53.021 [2024-11-05 16:24:06.037239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:09:53.280 [2024-11-05 16:24:06.214858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.280 [2024-11-05 16:24:06.331726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.538 [2024-11-05 16:24:06.547390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.538 [2024-11-05 16:24:06.547561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.119 16:24:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.119 16:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.119 malloc1 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.119 [2024-11-05 16:24:07.014127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.119 [2024-11-05 16:24:07.014294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.119 [2024-11-05 16:24:07.014333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:54.119 [2024-11-05 16:24:07.014346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.119 
[2024-11-05 16:24:07.017173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.119 [2024-11-05 16:24:07.017228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.119 pt1 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.119 malloc2 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.119 16:24:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.119 [2024-11-05 16:24:07.072564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.119 [2024-11-05 16:24:07.072680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.119 [2024-11-05 16:24:07.072726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:54.119 [2024-11-05 16:24:07.072764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.119 [2024-11-05 16:24:07.075198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.119 [2024-11-05 16:24:07.075276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.119 pt2 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:54.119 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.120 [2024-11-05 16:24:07.084626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.120 [2024-11-05 16:24:07.086997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.120 [2024-11-05 16:24:07.087254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:54.120 [2024-11-05 16:24:07.087318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.120 [2024-11-05 
16:24:07.087657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.120 [2024-11-05 16:24:07.087895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:54.120 [2024-11-05 16:24:07.087953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:54.120 [2024-11-05 16:24:07.088190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.120 16:24:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.120 "name": "raid_bdev1", 00:09:54.120 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:54.120 "strip_size_kb": 0, 00:09:54.120 "state": "online", 00:09:54.120 "raid_level": "raid1", 00:09:54.120 "superblock": true, 00:09:54.120 "num_base_bdevs": 2, 00:09:54.120 "num_base_bdevs_discovered": 2, 00:09:54.120 "num_base_bdevs_operational": 2, 00:09:54.120 "base_bdevs_list": [ 00:09:54.120 { 00:09:54.120 "name": "pt1", 00:09:54.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.120 "is_configured": true, 00:09:54.120 "data_offset": 2048, 00:09:54.120 "data_size": 63488 00:09:54.120 }, 00:09:54.120 { 00:09:54.120 "name": "pt2", 00:09:54.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.120 "is_configured": true, 00:09:54.120 "data_offset": 2048, 00:09:54.120 "data_size": 63488 00:09:54.120 } 00:09:54.120 ] 00:09:54.120 }' 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.120 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.379 
16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.379 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.638 [2024-11-05 16:24:07.472279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.638 "name": "raid_bdev1", 00:09:54.638 "aliases": [ 00:09:54.638 "abbf91cf-b8a3-42d8-bd2a-46191a424873" 00:09:54.638 ], 00:09:54.638 "product_name": "Raid Volume", 00:09:54.638 "block_size": 512, 00:09:54.638 "num_blocks": 63488, 00:09:54.638 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:54.638 "assigned_rate_limits": { 00:09:54.638 "rw_ios_per_sec": 0, 00:09:54.638 "rw_mbytes_per_sec": 0, 00:09:54.638 "r_mbytes_per_sec": 0, 00:09:54.638 "w_mbytes_per_sec": 0 00:09:54.638 }, 00:09:54.638 "claimed": false, 00:09:54.638 "zoned": false, 00:09:54.638 "supported_io_types": { 00:09:54.638 "read": true, 00:09:54.638 "write": true, 00:09:54.638 "unmap": false, 00:09:54.638 "flush": false, 00:09:54.638 "reset": true, 00:09:54.638 "nvme_admin": false, 00:09:54.638 "nvme_io": false, 00:09:54.638 "nvme_io_md": false, 00:09:54.638 "write_zeroes": true, 00:09:54.638 "zcopy": false, 00:09:54.638 "get_zone_info": false, 00:09:54.638 "zone_management": false, 00:09:54.638 "zone_append": false, 00:09:54.638 "compare": false, 00:09:54.638 "compare_and_write": false, 00:09:54.638 "abort": false, 00:09:54.638 "seek_hole": false, 
00:09:54.638 "seek_data": false, 00:09:54.638 "copy": false, 00:09:54.638 "nvme_iov_md": false 00:09:54.638 }, 00:09:54.638 "memory_domains": [ 00:09:54.638 { 00:09:54.638 "dma_device_id": "system", 00:09:54.638 "dma_device_type": 1 00:09:54.638 }, 00:09:54.638 { 00:09:54.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.638 "dma_device_type": 2 00:09:54.638 }, 00:09:54.638 { 00:09:54.638 "dma_device_id": "system", 00:09:54.638 "dma_device_type": 1 00:09:54.638 }, 00:09:54.638 { 00:09:54.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.638 "dma_device_type": 2 00:09:54.638 } 00:09:54.638 ], 00:09:54.638 "driver_specific": { 00:09:54.638 "raid": { 00:09:54.638 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:54.638 "strip_size_kb": 0, 00:09:54.638 "state": "online", 00:09:54.638 "raid_level": "raid1", 00:09:54.638 "superblock": true, 00:09:54.638 "num_base_bdevs": 2, 00:09:54.638 "num_base_bdevs_discovered": 2, 00:09:54.638 "num_base_bdevs_operational": 2, 00:09:54.638 "base_bdevs_list": [ 00:09:54.638 { 00:09:54.638 "name": "pt1", 00:09:54.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.638 "is_configured": true, 00:09:54.638 "data_offset": 2048, 00:09:54.638 "data_size": 63488 00:09:54.638 }, 00:09:54.638 { 00:09:54.638 "name": "pt2", 00:09:54.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.638 "is_configured": true, 00:09:54.638 "data_offset": 2048, 00:09:54.638 "data_size": 63488 00:09:54.638 } 00:09:54.638 ] 00:09:54.638 } 00:09:54.638 } 00:09:54.638 }' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:54.638 pt2' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.638 16:24:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.638 [2024-11-05 16:24:07.708049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.638 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=abbf91cf-b8a3-42d8-bd2a-46191a424873 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z abbf91cf-b8a3-42d8-bd2a-46191a424873 ']' 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.896 [2024-11-05 16:24:07.743662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.896 [2024-11-05 16:24:07.743695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.896 [2024-11-05 16:24:07.743811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.896 [2024-11-05 16:24:07.743892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.896 [2024-11-05 16:24:07.743909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.896 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 [2024-11-05 16:24:07.863742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:54.897 [2024-11-05 16:24:07.866169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:54.897 [2024-11-05 16:24:07.866317] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:09:54.897 [2024-11-05 16:24:07.866441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:54.897 [2024-11-05 16:24:07.866530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.897 [2024-11-05 16:24:07.866582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:54.897 request: 00:09:54.897 { 00:09:54.897 "name": "raid_bdev1", 00:09:54.897 "raid_level": "raid1", 00:09:54.897 "base_bdevs": [ 00:09:54.897 "malloc1", 00:09:54.897 "malloc2" 00:09:54.897 ], 00:09:54.897 "superblock": false, 00:09:54.897 "method": "bdev_raid_create", 00:09:54.897 "req_id": 1 00:09:54.897 } 00:09:54.897 Got JSON-RPC error response 00:09:54.897 response: 00:09:54.897 { 00:09:54.897 "code": -17, 00:09:54.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:54.897 } 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 [2024-11-05 16:24:07.919713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.897 [2024-11-05 16:24:07.919800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.897 [2024-11-05 16:24:07.919825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:54.897 [2024-11-05 16:24:07.919841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.897 [2024-11-05 16:24:07.922640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.897 [2024-11-05 16:24:07.922731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.897 [2024-11-05 16:24:07.922854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:54.897 [2024-11-05 16:24:07.922939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.897 pt1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.897 16:24:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.897 "name": "raid_bdev1", 00:09:54.897 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:54.897 "strip_size_kb": 0, 00:09:54.897 "state": "configuring", 00:09:54.897 "raid_level": "raid1", 00:09:54.897 "superblock": true, 00:09:54.897 "num_base_bdevs": 2, 00:09:54.897 "num_base_bdevs_discovered": 1, 00:09:54.897 "num_base_bdevs_operational": 2, 00:09:54.897 "base_bdevs_list": [ 00:09:54.897 { 00:09:54.897 "name": "pt1", 00:09:54.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.897 
"is_configured": true, 00:09:54.897 "data_offset": 2048, 00:09:54.897 "data_size": 63488 00:09:54.897 }, 00:09:54.897 { 00:09:54.897 "name": null, 00:09:54.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.897 "is_configured": false, 00:09:54.897 "data_offset": 2048, 00:09:54.897 "data_size": 63488 00:09:54.897 } 00:09:54.897 ] 00:09:54.897 }' 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.897 16:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.465 [2024-11-05 16:24:08.363031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.465 [2024-11-05 16:24:08.363111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.465 [2024-11-05 16:24:08.363135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:55.465 [2024-11-05 16:24:08.363148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.465 [2024-11-05 16:24:08.363677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.465 [2024-11-05 16:24:08.363701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.465 [2024-11-05 16:24:08.363796] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.465 [2024-11-05 16:24:08.363825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.465 [2024-11-05 16:24:08.363957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.465 [2024-11-05 16:24:08.363970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:55.465 [2024-11-05 16:24:08.364226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:55.465 [2024-11-05 16:24:08.364391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.465 [2024-11-05 16:24:08.364402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:55.465 [2024-11-05 16:24:08.364686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.465 pt2 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.465 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.465 
16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.466 "name": "raid_bdev1", 00:09:55.466 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:55.466 "strip_size_kb": 0, 00:09:55.466 "state": "online", 00:09:55.466 "raid_level": "raid1", 00:09:55.466 "superblock": true, 00:09:55.466 "num_base_bdevs": 2, 00:09:55.466 "num_base_bdevs_discovered": 2, 00:09:55.466 "num_base_bdevs_operational": 2, 00:09:55.466 "base_bdevs_list": [ 00:09:55.466 { 00:09:55.466 "name": "pt1", 00:09:55.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.466 "is_configured": true, 00:09:55.466 "data_offset": 2048, 00:09:55.466 "data_size": 63488 00:09:55.466 }, 00:09:55.466 { 00:09:55.466 "name": "pt2", 00:09:55.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.466 "is_configured": true, 00:09:55.466 "data_offset": 2048, 00:09:55.466 "data_size": 63488 00:09:55.466 } 00:09:55.466 ] 00:09:55.466 }' 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:55.466 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.034 [2024-11-05 16:24:08.854501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.034 "name": "raid_bdev1", 00:09:56.034 "aliases": [ 00:09:56.034 "abbf91cf-b8a3-42d8-bd2a-46191a424873" 00:09:56.034 ], 00:09:56.034 "product_name": "Raid Volume", 00:09:56.034 "block_size": 512, 00:09:56.034 "num_blocks": 63488, 00:09:56.034 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:56.034 "assigned_rate_limits": { 00:09:56.034 "rw_ios_per_sec": 0, 00:09:56.034 "rw_mbytes_per_sec": 0, 00:09:56.034 "r_mbytes_per_sec": 0, 00:09:56.034 "w_mbytes_per_sec": 0 
00:09:56.034 }, 00:09:56.034 "claimed": false, 00:09:56.034 "zoned": false, 00:09:56.034 "supported_io_types": { 00:09:56.034 "read": true, 00:09:56.034 "write": true, 00:09:56.034 "unmap": false, 00:09:56.034 "flush": false, 00:09:56.034 "reset": true, 00:09:56.034 "nvme_admin": false, 00:09:56.034 "nvme_io": false, 00:09:56.034 "nvme_io_md": false, 00:09:56.034 "write_zeroes": true, 00:09:56.034 "zcopy": false, 00:09:56.034 "get_zone_info": false, 00:09:56.034 "zone_management": false, 00:09:56.034 "zone_append": false, 00:09:56.034 "compare": false, 00:09:56.034 "compare_and_write": false, 00:09:56.034 "abort": false, 00:09:56.034 "seek_hole": false, 00:09:56.034 "seek_data": false, 00:09:56.034 "copy": false, 00:09:56.034 "nvme_iov_md": false 00:09:56.034 }, 00:09:56.034 "memory_domains": [ 00:09:56.034 { 00:09:56.034 "dma_device_id": "system", 00:09:56.034 "dma_device_type": 1 00:09:56.034 }, 00:09:56.034 { 00:09:56.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.034 "dma_device_type": 2 00:09:56.034 }, 00:09:56.034 { 00:09:56.034 "dma_device_id": "system", 00:09:56.034 "dma_device_type": 1 00:09:56.034 }, 00:09:56.034 { 00:09:56.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.034 "dma_device_type": 2 00:09:56.034 } 00:09:56.034 ], 00:09:56.034 "driver_specific": { 00:09:56.034 "raid": { 00:09:56.034 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:56.034 "strip_size_kb": 0, 00:09:56.034 "state": "online", 00:09:56.034 "raid_level": "raid1", 00:09:56.034 "superblock": true, 00:09:56.034 "num_base_bdevs": 2, 00:09:56.034 "num_base_bdevs_discovered": 2, 00:09:56.034 "num_base_bdevs_operational": 2, 00:09:56.034 "base_bdevs_list": [ 00:09:56.034 { 00:09:56.034 "name": "pt1", 00:09:56.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.034 "is_configured": true, 00:09:56.034 "data_offset": 2048, 00:09:56.034 "data_size": 63488 00:09:56.034 }, 00:09:56.034 { 00:09:56.034 "name": "pt2", 00:09:56.034 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:56.034 "is_configured": true, 00:09:56.034 "data_offset": 2048, 00:09:56.034 "data_size": 63488 00:09:56.034 } 00:09:56.034 ] 00:09:56.034 } 00:09:56.034 } 00:09:56.034 }' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.034 pt2' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.034 16:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.035 [2024-11-05 16:24:09.066096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' abbf91cf-b8a3-42d8-bd2a-46191a424873 '!=' abbf91cf-b8a3-42d8-bd2a-46191a424873 ']' 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.035 [2024-11-05 16:24:09.113815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.035 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:56.294 "name": "raid_bdev1", 00:09:56.294 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:56.294 "strip_size_kb": 0, 00:09:56.294 "state": "online", 00:09:56.294 "raid_level": "raid1", 00:09:56.294 "superblock": true, 00:09:56.294 "num_base_bdevs": 2, 00:09:56.294 "num_base_bdevs_discovered": 1, 00:09:56.294 "num_base_bdevs_operational": 1, 00:09:56.294 "base_bdevs_list": [ 00:09:56.294 { 00:09:56.294 "name": null, 00:09:56.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.294 "is_configured": false, 00:09:56.294 "data_offset": 0, 00:09:56.294 "data_size": 63488 00:09:56.294 }, 00:09:56.294 { 00:09:56.294 "name": "pt2", 00:09:56.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.294 "is_configured": true, 00:09:56.294 "data_offset": 2048, 00:09:56.294 "data_size": 63488 00:09:56.294 } 00:09:56.294 ] 00:09:56.294 }' 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.294 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.555 [2024-11-05 16:24:09.576976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.555 [2024-11-05 16:24:09.577072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.555 [2024-11-05 16:24:09.577208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.555 [2024-11-05 16:24:09.577312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.555 [2024-11-05 16:24:09.577374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.555 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.814 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.814 [2024-11-05 16:24:09.656871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.814 [2024-11-05 16:24:09.657013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.814 [2024-11-05 16:24:09.657075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.814 [2024-11-05 16:24:09.657127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.814 [2024-11-05 16:24:09.659757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.814 [2024-11-05 16:24:09.659861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.814 [2024-11-05 16:24:09.660031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.814 [2024-11-05 16:24:09.660158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.814 [2024-11-05 16:24:09.660353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.814 [2024-11-05 16:24:09.660413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.814 [2024-11-05 16:24:09.660785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:56.814 [2024-11-05 16:24:09.661030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.815 [2024-11-05 16:24:09.661082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:09:56.815 [2024-11-05 16:24:09.661363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.815 pt2 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:56.815 "name": "raid_bdev1", 00:09:56.815 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:56.815 "strip_size_kb": 0, 00:09:56.815 "state": "online", 00:09:56.815 "raid_level": "raid1", 00:09:56.815 "superblock": true, 00:09:56.815 "num_base_bdevs": 2, 00:09:56.815 "num_base_bdevs_discovered": 1, 00:09:56.815 "num_base_bdevs_operational": 1, 00:09:56.815 "base_bdevs_list": [ 00:09:56.815 { 00:09:56.815 "name": null, 00:09:56.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.815 "is_configured": false, 00:09:56.815 "data_offset": 2048, 00:09:56.815 "data_size": 63488 00:09:56.815 }, 00:09:56.815 { 00:09:56.815 "name": "pt2", 00:09:56.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.815 "is_configured": true, 00:09:56.815 "data_offset": 2048, 00:09:56.815 "data_size": 63488 00:09:56.815 } 00:09:56.815 ] 00:09:56.815 }' 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.815 16:24:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.074 [2024-11-05 16:24:10.132502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.074 [2024-11-05 16:24:10.132558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.074 [2024-11-05 16:24:10.132653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.074 [2024-11-05 16:24:10.132715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.074 [2024-11-05 16:24:10.132726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:57.074 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.333 [2024-11-05 16:24:10.196414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.333 [2024-11-05 16:24:10.196514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.333 [2024-11-05 16:24:10.196548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:57.333 [2024-11-05 16:24:10.196559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.333 [2024-11-05 16:24:10.199108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.333 [2024-11-05 16:24:10.199150] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.333 [2024-11-05 16:24:10.199258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:57.333 [2024-11-05 16:24:10.199314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.333 [2024-11-05 16:24:10.199465] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:57.333 [2024-11-05 16:24:10.199476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.333 [2024-11-05 16:24:10.199496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:57.333 [2024-11-05 16:24:10.199604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.333 [2024-11-05 16:24:10.199718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:57.333 [2024-11-05 16:24:10.199728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.333 [2024-11-05 16:24:10.200013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:57.333 [2024-11-05 16:24:10.200185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:57.333 [2024-11-05 16:24:10.200200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:57.333 [2024-11-05 16:24:10.200431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.333 pt1 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.333 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.334 "name": "raid_bdev1", 00:09:57.334 "uuid": "abbf91cf-b8a3-42d8-bd2a-46191a424873", 00:09:57.334 "strip_size_kb": 0, 00:09:57.334 "state": "online", 00:09:57.334 "raid_level": "raid1", 00:09:57.334 "superblock": true, 00:09:57.334 "num_base_bdevs": 2, 00:09:57.334 "num_base_bdevs_discovered": 1, 00:09:57.334 "num_base_bdevs_operational": 
1, 00:09:57.334 "base_bdevs_list": [ 00:09:57.334 { 00:09:57.334 "name": null, 00:09:57.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.334 "is_configured": false, 00:09:57.334 "data_offset": 2048, 00:09:57.334 "data_size": 63488 00:09:57.334 }, 00:09:57.334 { 00:09:57.334 "name": "pt2", 00:09:57.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.334 "is_configured": true, 00:09:57.334 "data_offset": 2048, 00:09:57.334 "data_size": 63488 00:09:57.334 } 00:09:57.334 ] 00:09:57.334 }' 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.334 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.593 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:57.593 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:57.593 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.593 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.593 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.850 [2024-11-05 16:24:10.695871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' abbf91cf-b8a3-42d8-bd2a-46191a424873 '!=' abbf91cf-b8a3-42d8-bd2a-46191a424873 ']' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63433 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63433 ']' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63433 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63433 00:09:57.850 killing process with pid 63433 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63433' 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63433 00:09:57.850 [2024-11-05 16:24:10.763171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.850 [2024-11-05 16:24:10.763278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.850 16:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63433 00:09:57.850 [2024-11-05 16:24:10.763334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.850 [2024-11-05 16:24:10.763351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:09:58.109 [2024-11-05 16:24:10.981250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.485 16:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:59.485 00:09:59.485 real 0m6.266s 00:09:59.485 user 0m9.515s 00:09:59.485 sys 0m0.985s 00:09:59.485 ************************************ 00:09:59.485 END TEST raid_superblock_test 00:09:59.485 ************************************ 00:09:59.485 16:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.485 16:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.485 16:24:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:59.485 16:24:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:59.485 16:24:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.485 16:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.485 ************************************ 00:09:59.485 START TEST raid_read_error_test 00:09:59.485 ************************************ 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3HSTgDywmA 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63771 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63771 00:09:59.485 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63771 ']' 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.485 16:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.485 [2024-11-05 16:24:12.385465] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:09:59.485 [2024-11-05 16:24:12.385727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63771 ] 00:09:59.485 [2024-11-05 16:24:12.563880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.744 [2024-11-05 16:24:12.689606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.004 [2024-11-05 16:24:12.914207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.004 [2024-11-05 16:24:12.914264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 BaseBdev1_malloc 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 true 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 [2024-11-05 16:24:13.349232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.263 [2024-11-05 16:24:13.349292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.263 [2024-11-05 16:24:13.349315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:00.263 [2024-11-05 16:24:13.349327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.263 [2024-11-05 16:24:13.351774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.263 [2024-11-05 16:24:13.351855] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.263 BaseBdev1 00:10:00.523 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.523 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.524 BaseBdev2_malloc 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.524 true 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.524 [2024-11-05 16:24:13.418461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:00.524 [2024-11-05 16:24:13.418585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.524 [2024-11-05 16:24:13.418611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:00.524 [2024-11-05 16:24:13.418624] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.524 [2024-11-05 16:24:13.421012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.524 [2024-11-05 16:24:13.421056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:00.524 BaseBdev2 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.524 [2024-11-05 16:24:13.430509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.524 [2024-11-05 16:24:13.432499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.524 [2024-11-05 16:24:13.432706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.524 [2024-11-05 16:24:13.432742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.524 [2024-11-05 16:24:13.433001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:00.524 [2024-11-05 16:24:13.433225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.524 [2024-11-05 16:24:13.433237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:00.524 [2024-11-05 16:24:13.433397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.524 "name": "raid_bdev1", 00:10:00.524 "uuid": "e69402e1-a36d-49b6-bb13-bf2287f04256", 00:10:00.524 "strip_size_kb": 0, 00:10:00.524 "state": "online", 00:10:00.524 "raid_level": "raid1", 00:10:00.524 "superblock": true, 00:10:00.524 "num_base_bdevs": 2, 00:10:00.524 
"num_base_bdevs_discovered": 2, 00:10:00.524 "num_base_bdevs_operational": 2, 00:10:00.524 "base_bdevs_list": [ 00:10:00.524 { 00:10:00.524 "name": "BaseBdev1", 00:10:00.524 "uuid": "4fe36974-57fa-51ed-95fa-8c21bcd8e333", 00:10:00.524 "is_configured": true, 00:10:00.524 "data_offset": 2048, 00:10:00.524 "data_size": 63488 00:10:00.524 }, 00:10:00.524 { 00:10:00.524 "name": "BaseBdev2", 00:10:00.524 "uuid": "7cb15cc7-5ee5-53dd-9d5e-c9b1ba7e9e87", 00:10:00.524 "is_configured": true, 00:10:00.524 "data_offset": 2048, 00:10:00.524 "data_size": 63488 00:10:00.524 } 00:10:00.524 ] 00:10:00.524 }' 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.524 16:24:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.092 16:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.092 [2024-11-05 16:24:14.022927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:02.028 16:24:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.028 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.028 "name": "raid_bdev1", 00:10:02.028 "uuid": "e69402e1-a36d-49b6-bb13-bf2287f04256", 00:10:02.028 "strip_size_kb": 0, 00:10:02.028 "state": "online", 
00:10:02.028 "raid_level": "raid1", 00:10:02.028 "superblock": true, 00:10:02.028 "num_base_bdevs": 2, 00:10:02.028 "num_base_bdevs_discovered": 2, 00:10:02.028 "num_base_bdevs_operational": 2, 00:10:02.028 "base_bdevs_list": [ 00:10:02.028 { 00:10:02.028 "name": "BaseBdev1", 00:10:02.028 "uuid": "4fe36974-57fa-51ed-95fa-8c21bcd8e333", 00:10:02.028 "is_configured": true, 00:10:02.028 "data_offset": 2048, 00:10:02.028 "data_size": 63488 00:10:02.028 }, 00:10:02.028 { 00:10:02.029 "name": "BaseBdev2", 00:10:02.029 "uuid": "7cb15cc7-5ee5-53dd-9d5e-c9b1ba7e9e87", 00:10:02.029 "is_configured": true, 00:10:02.029 "data_offset": 2048, 00:10:02.029 "data_size": 63488 00:10:02.029 } 00:10:02.029 ] 00:10:02.029 }' 00:10:02.029 16:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.029 16:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.596 [2024-11-05 16:24:15.383319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.596 [2024-11-05 16:24:15.383428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.596 [2024-11-05 16:24:15.386488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.596 [2024-11-05 16:24:15.386594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.596 [2024-11-05 16:24:15.386708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.596 [2024-11-05 16:24:15.386761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:10:02.596 { 00:10:02.596 "results": [ 00:10:02.596 { 00:10:02.596 "job": "raid_bdev1", 00:10:02.596 "core_mask": "0x1", 00:10:02.596 "workload": "randrw", 00:10:02.596 "percentage": 50, 00:10:02.596 "status": "finished", 00:10:02.596 "queue_depth": 1, 00:10:02.596 "io_size": 131072, 00:10:02.596 "runtime": 1.361158, 00:10:02.596 "iops": 15195.884680544066, 00:10:02.596 "mibps": 1899.4855850680083, 00:10:02.596 "io_failed": 0, 00:10:02.596 "io_timeout": 0, 00:10:02.596 "avg_latency_us": 62.56051763318946, 00:10:02.596 "min_latency_us": 24.929257641921396, 00:10:02.596 "max_latency_us": 1795.8008733624454 00:10:02.596 } 00:10:02.596 ], 00:10:02.596 "core_count": 1 00:10:02.596 } 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63771 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63771 ']' 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63771 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63771 00:10:02.596 killing process with pid 63771 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63771' 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63771 00:10:02.596 [2024-11-05 
16:24:15.435719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.596 16:24:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63771 00:10:02.596 [2024-11-05 16:24:15.599300] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3HSTgDywmA 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:03.974 ************************************ 00:10:03.974 END TEST raid_read_error_test 00:10:03.974 ************************************ 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:03.974 00:10:03.974 real 0m4.767s 00:10:03.974 user 0m5.718s 00:10:03.974 sys 0m0.580s 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.974 16:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 16:24:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:04.234 16:24:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:04.234 16:24:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.234 16:24:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 ************************************ 00:10:04.234 START TEST 
raid_write_error_test 00:10:04.234 ************************************ 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.234 16:24:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7B95wyDsfI 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63915 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63915 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63915 ']' 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.234 16:24:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 [2024-11-05 16:24:17.228230] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:10:04.234 [2024-11-05 16:24:17.228358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63915 ] 00:10:04.494 [2024-11-05 16:24:17.387499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.494 [2024-11-05 16:24:17.529896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.753 [2024-11-05 16:24:17.773075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.753 [2024-11-05 16:24:17.773190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.013 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 BaseBdev1_malloc 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 true 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 [2024-11-05 16:24:18.141286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.273 [2024-11-05 16:24:18.141391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.273 [2024-11-05 16:24:18.141424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.273 [2024-11-05 16:24:18.141440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.273 [2024-11-05 16:24:18.144175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.273 [2024-11-05 16:24:18.144229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.273 BaseBdev1 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 BaseBdev2_malloc 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.273 16:24:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 true 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 [2024-11-05 16:24:18.217723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.273 [2024-11-05 16:24:18.217827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.273 [2024-11-05 16:24:18.217855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.273 [2024-11-05 16:24:18.217869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.273 [2024-11-05 16:24:18.220551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.273 [2024-11-05 16:24:18.220697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.273 BaseBdev2 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.273 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.273 [2024-11-05 16:24:18.229783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:05.273 [2024-11-05 16:24:18.232026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.273 [2024-11-05 16:24:18.232421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.273 [2024-11-05 16:24:18.232459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.273 [2024-11-05 16:24:18.232811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:05.273 [2024-11-05 16:24:18.233057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.273 [2024-11-05 16:24:18.233080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:05.274 [2024-11-05 16:24:18.233313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.274 "name": "raid_bdev1", 00:10:05.274 "uuid": "2b858580-d9e1-43ac-8af9-485cf409d4a3", 00:10:05.274 "strip_size_kb": 0, 00:10:05.274 "state": "online", 00:10:05.274 "raid_level": "raid1", 00:10:05.274 "superblock": true, 00:10:05.274 "num_base_bdevs": 2, 00:10:05.274 "num_base_bdevs_discovered": 2, 00:10:05.274 "num_base_bdevs_operational": 2, 00:10:05.274 "base_bdevs_list": [ 00:10:05.274 { 00:10:05.274 "name": "BaseBdev1", 00:10:05.274 "uuid": "837e3cb9-74b7-5cbe-92df-f4284badd628", 00:10:05.274 "is_configured": true, 00:10:05.274 "data_offset": 2048, 00:10:05.274 "data_size": 63488 00:10:05.274 }, 00:10:05.274 { 00:10:05.274 "name": "BaseBdev2", 00:10:05.274 "uuid": "be67cc65-fd0b-5638-a1b7-814a942bafea", 00:10:05.274 "is_configured": true, 00:10:05.274 "data_offset": 2048, 00:10:05.274 "data_size": 63488 00:10:05.274 } 00:10:05.274 ] 00:10:05.274 }' 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.274 16:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.849 16:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.849 16:24:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.849 [2024-11-05 16:24:18.842445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:06.798 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:06.798 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.798 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.798 [2024-11-05 16:24:19.726085] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:06.799 [2024-11-05 16:24:19.726309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.799 [2024-11-05 16:24:19.726603] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.799 "name": "raid_bdev1", 00:10:06.799 "uuid": "2b858580-d9e1-43ac-8af9-485cf409d4a3", 00:10:06.799 "strip_size_kb": 0, 00:10:06.799 "state": "online", 00:10:06.799 "raid_level": "raid1", 00:10:06.799 "superblock": true, 00:10:06.799 "num_base_bdevs": 2, 00:10:06.799 "num_base_bdevs_discovered": 1, 00:10:06.799 "num_base_bdevs_operational": 1, 00:10:06.799 "base_bdevs_list": [ 00:10:06.799 { 00:10:06.799 "name": null, 00:10:06.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.799 "is_configured": false, 00:10:06.799 "data_offset": 0, 00:10:06.799 "data_size": 63488 00:10:06.799 }, 00:10:06.799 { 00:10:06.799 "name": 
"BaseBdev2", 00:10:06.799 "uuid": "be67cc65-fd0b-5638-a1b7-814a942bafea", 00:10:06.799 "is_configured": true, 00:10:06.799 "data_offset": 2048, 00:10:06.799 "data_size": 63488 00:10:06.799 } 00:10:06.799 ] 00:10:06.799 }' 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.799 16:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.368 [2024-11-05 16:24:20.179960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.368 [2024-11-05 16:24:20.180013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.368 [2024-11-05 16:24:20.182881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.368 [2024-11-05 16:24:20.182998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.368 [2024-11-05 16:24:20.183134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.368 [2024-11-05 16:24:20.183211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:07.368 { 00:10:07.368 "results": [ 00:10:07.368 { 00:10:07.368 "job": "raid_bdev1", 00:10:07.368 "core_mask": "0x1", 00:10:07.368 "workload": "randrw", 00:10:07.368 "percentage": 50, 00:10:07.368 "status": "finished", 00:10:07.368 "queue_depth": 1, 00:10:07.368 "io_size": 131072, 00:10:07.368 "runtime": 1.337632, 00:10:07.368 "iops": 14375.403698476113, 00:10:07.368 "mibps": 1796.9254623095142, 00:10:07.368 "io_failed": 0, 00:10:07.368 "io_timeout": 0, 
00:10:07.368 "avg_latency_us": 66.65626159178696, 00:10:07.368 "min_latency_us": 23.923144104803495, 00:10:07.368 "max_latency_us": 1452.380786026201 00:10:07.368 } 00:10:07.368 ], 00:10:07.368 "core_count": 1 00:10:07.368 } 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63915 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63915 ']' 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63915 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63915 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.368 killing process with pid 63915 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63915' 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63915 00:10:07.368 16:24:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63915 00:10:07.368 [2024-11-05 16:24:20.228450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.368 [2024-11-05 16:24:20.382374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7B95wyDsfI 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:08.748 ************************************ 00:10:08.748 END TEST raid_write_error_test 00:10:08.748 ************************************ 00:10:08.748 00:10:08.748 real 0m4.614s 00:10:08.748 user 0m5.411s 00:10:08.748 sys 0m0.682s 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.748 16:24:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 16:24:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:08.748 16:24:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:08.748 16:24:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:08.748 16:24:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:08.748 16:24:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.748 16:24:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 ************************************ 00:10:08.748 START TEST raid_state_function_test 00:10:08.748 ************************************ 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:08.748 
16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:08.748 Process raid pid: 64061 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64061 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64061' 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64061 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64061 ']' 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:08.748 16:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.008 [2024-11-05 16:24:21.886436] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:10:09.008 [2024-11-05 16:24:21.886682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.008 [2024-11-05 16:24:22.066914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.267 [2024-11-05 16:24:22.224800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.528 [2024-11-05 16:24:22.483055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.528 [2024-11-05 16:24:22.483130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.787 [2024-11-05 16:24:22.753810] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.787 [2024-11-05 
16:24:22.753905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.787 [2024-11-05 16:24:22.753918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.787 [2024-11-05 16:24:22.753931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.787 [2024-11-05 16:24:22.753940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.787 [2024-11-05 16:24:22.753951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.787 "name": "Existed_Raid", 00:10:09.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.787 "strip_size_kb": 64, 00:10:09.787 "state": "configuring", 00:10:09.787 "raid_level": "raid0", 00:10:09.787 "superblock": false, 00:10:09.787 "num_base_bdevs": 3, 00:10:09.787 "num_base_bdevs_discovered": 0, 00:10:09.787 "num_base_bdevs_operational": 3, 00:10:09.787 "base_bdevs_list": [ 00:10:09.787 { 00:10:09.787 "name": "BaseBdev1", 00:10:09.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.787 "is_configured": false, 00:10:09.787 "data_offset": 0, 00:10:09.787 "data_size": 0 00:10:09.787 }, 00:10:09.787 { 00:10:09.787 "name": "BaseBdev2", 00:10:09.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.787 "is_configured": false, 00:10:09.787 "data_offset": 0, 00:10:09.787 "data_size": 0 00:10:09.787 }, 00:10:09.787 { 00:10:09.787 "name": "BaseBdev3", 00:10:09.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.787 "is_configured": false, 00:10:09.787 "data_offset": 0, 00:10:09.787 "data_size": 0 00:10:09.787 } 00:10:09.787 ] 00:10:09.787 }' 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.787 16:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [2024-11-05 16:24:23.236866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.355 [2024-11-05 16:24:23.236987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [2024-11-05 16:24:23.248813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.355 [2024-11-05 16:24:23.248925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.355 [2024-11-05 16:24:23.248962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.355 [2024-11-05 16:24:23.248991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.355 [2024-11-05 16:24:23.249015] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.355 [2024-11-05 16:24:23.249043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [2024-11-05 16:24:23.297563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.355 BaseBdev1 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 [ 00:10:10.355 { 
00:10:10.355 "name": "BaseBdev1", 00:10:10.355 "aliases": [ 00:10:10.355 "172db6c8-468b-4483-9310-814a7bc1d61d" 00:10:10.355 ], 00:10:10.355 "product_name": "Malloc disk", 00:10:10.355 "block_size": 512, 00:10:10.355 "num_blocks": 65536, 00:10:10.355 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:10.355 "assigned_rate_limits": { 00:10:10.355 "rw_ios_per_sec": 0, 00:10:10.355 "rw_mbytes_per_sec": 0, 00:10:10.355 "r_mbytes_per_sec": 0, 00:10:10.355 "w_mbytes_per_sec": 0 00:10:10.355 }, 00:10:10.355 "claimed": true, 00:10:10.355 "claim_type": "exclusive_write", 00:10:10.355 "zoned": false, 00:10:10.355 "supported_io_types": { 00:10:10.355 "read": true, 00:10:10.355 "write": true, 00:10:10.355 "unmap": true, 00:10:10.355 "flush": true, 00:10:10.355 "reset": true, 00:10:10.355 "nvme_admin": false, 00:10:10.355 "nvme_io": false, 00:10:10.355 "nvme_io_md": false, 00:10:10.355 "write_zeroes": true, 00:10:10.355 "zcopy": true, 00:10:10.355 "get_zone_info": false, 00:10:10.355 "zone_management": false, 00:10:10.355 "zone_append": false, 00:10:10.355 "compare": false, 00:10:10.355 "compare_and_write": false, 00:10:10.355 "abort": true, 00:10:10.355 "seek_hole": false, 00:10:10.355 "seek_data": false, 00:10:10.355 "copy": true, 00:10:10.355 "nvme_iov_md": false 00:10:10.355 }, 00:10:10.355 "memory_domains": [ 00:10:10.355 { 00:10:10.355 "dma_device_id": "system", 00:10:10.355 "dma_device_type": 1 00:10:10.355 }, 00:10:10.355 { 00:10:10.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.355 "dma_device_type": 2 00:10:10.355 } 00:10:10.355 ], 00:10:10.355 "driver_specific": {} 00:10:10.355 } 00:10:10.355 ] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.355 "name": "Existed_Raid", 00:10:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.355 "strip_size_kb": 64, 00:10:10.355 "state": "configuring", 00:10:10.355 "raid_level": "raid0", 00:10:10.355 "superblock": false, 00:10:10.355 "num_base_bdevs": 3, 00:10:10.355 
"num_base_bdevs_discovered": 1, 00:10:10.355 "num_base_bdevs_operational": 3, 00:10:10.355 "base_bdevs_list": [ 00:10:10.355 { 00:10:10.355 "name": "BaseBdev1", 00:10:10.355 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:10.355 "is_configured": true, 00:10:10.355 "data_offset": 0, 00:10:10.355 "data_size": 65536 00:10:10.355 }, 00:10:10.355 { 00:10:10.355 "name": "BaseBdev2", 00:10:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.355 "is_configured": false, 00:10:10.355 "data_offset": 0, 00:10:10.355 "data_size": 0 00:10:10.355 }, 00:10:10.355 { 00:10:10.355 "name": "BaseBdev3", 00:10:10.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.355 "is_configured": false, 00:10:10.355 "data_offset": 0, 00:10:10.355 "data_size": 0 00:10:10.355 } 00:10:10.355 ] 00:10:10.355 }' 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.355 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.977 [2024-11-05 16:24:23.800770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.977 [2024-11-05 16:24:23.800829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.977 [2024-11-05 16:24:23.812768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.977 [2024-11-05 16:24:23.814844] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.977 [2024-11-05 16:24:23.814887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.977 [2024-11-05 16:24:23.814897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.977 [2024-11-05 16:24:23.814906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.977 16:24:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.977 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.978 "name": "Existed_Raid", 00:10:10.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.978 "strip_size_kb": 64, 00:10:10.978 "state": "configuring", 00:10:10.978 "raid_level": "raid0", 00:10:10.978 "superblock": false, 00:10:10.978 "num_base_bdevs": 3, 00:10:10.978 "num_base_bdevs_discovered": 1, 00:10:10.978 "num_base_bdevs_operational": 3, 00:10:10.978 "base_bdevs_list": [ 00:10:10.978 { 00:10:10.978 "name": "BaseBdev1", 00:10:10.978 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:10.978 "is_configured": true, 00:10:10.978 "data_offset": 0, 00:10:10.978 "data_size": 65536 00:10:10.978 }, 00:10:10.978 { 00:10:10.978 "name": "BaseBdev2", 00:10:10.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.978 "is_configured": false, 00:10:10.978 "data_offset": 0, 00:10:10.978 "data_size": 0 00:10:10.978 }, 00:10:10.978 { 00:10:10.978 "name": "BaseBdev3", 00:10:10.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.978 "is_configured": false, 00:10:10.978 "data_offset": 
0, 00:10:10.978 "data_size": 0 00:10:10.978 } 00:10:10.978 ] 00:10:10.978 }' 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.978 16:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.240 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.240 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.240 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.240 [2024-11-05 16:24:24.328442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.240 BaseBdev2 00:10:11.240 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.499 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.500 [ 00:10:11.500 { 00:10:11.500 "name": "BaseBdev2", 00:10:11.500 "aliases": [ 00:10:11.500 "d498717f-92c7-4d35-8c63-62abdec33101" 00:10:11.500 ], 00:10:11.500 "product_name": "Malloc disk", 00:10:11.500 "block_size": 512, 00:10:11.500 "num_blocks": 65536, 00:10:11.500 "uuid": "d498717f-92c7-4d35-8c63-62abdec33101", 00:10:11.500 "assigned_rate_limits": { 00:10:11.500 "rw_ios_per_sec": 0, 00:10:11.500 "rw_mbytes_per_sec": 0, 00:10:11.500 "r_mbytes_per_sec": 0, 00:10:11.500 "w_mbytes_per_sec": 0 00:10:11.500 }, 00:10:11.500 "claimed": true, 00:10:11.500 "claim_type": "exclusive_write", 00:10:11.500 "zoned": false, 00:10:11.500 "supported_io_types": { 00:10:11.500 "read": true, 00:10:11.500 "write": true, 00:10:11.500 "unmap": true, 00:10:11.500 "flush": true, 00:10:11.500 "reset": true, 00:10:11.500 "nvme_admin": false, 00:10:11.500 "nvme_io": false, 00:10:11.500 "nvme_io_md": false, 00:10:11.500 "write_zeroes": true, 00:10:11.500 "zcopy": true, 00:10:11.500 "get_zone_info": false, 00:10:11.500 "zone_management": false, 00:10:11.500 "zone_append": false, 00:10:11.500 "compare": false, 00:10:11.500 "compare_and_write": false, 00:10:11.500 "abort": true, 00:10:11.500 "seek_hole": false, 00:10:11.500 "seek_data": false, 00:10:11.500 "copy": true, 00:10:11.500 "nvme_iov_md": false 00:10:11.500 }, 00:10:11.500 "memory_domains": [ 00:10:11.500 { 00:10:11.500 "dma_device_id": "system", 00:10:11.500 "dma_device_type": 1 00:10:11.500 }, 00:10:11.500 { 00:10:11.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.500 "dma_device_type": 2 00:10:11.500 } 00:10:11.500 ], 00:10:11.500 "driver_specific": {} 00:10:11.500 } 
00:10:11.500 ] 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.500 16:24:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.500 "name": "Existed_Raid", 00:10:11.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.500 "strip_size_kb": 64, 00:10:11.500 "state": "configuring", 00:10:11.500 "raid_level": "raid0", 00:10:11.500 "superblock": false, 00:10:11.500 "num_base_bdevs": 3, 00:10:11.500 "num_base_bdevs_discovered": 2, 00:10:11.500 "num_base_bdevs_operational": 3, 00:10:11.500 "base_bdevs_list": [ 00:10:11.500 { 00:10:11.500 "name": "BaseBdev1", 00:10:11.500 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:11.500 "is_configured": true, 00:10:11.500 "data_offset": 0, 00:10:11.500 "data_size": 65536 00:10:11.500 }, 00:10:11.500 { 00:10:11.500 "name": "BaseBdev2", 00:10:11.500 "uuid": "d498717f-92c7-4d35-8c63-62abdec33101", 00:10:11.500 "is_configured": true, 00:10:11.500 "data_offset": 0, 00:10:11.500 "data_size": 65536 00:10:11.500 }, 00:10:11.500 { 00:10:11.500 "name": "BaseBdev3", 00:10:11.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.500 "is_configured": false, 00:10:11.500 "data_offset": 0, 00:10:11.500 "data_size": 0 00:10:11.500 } 00:10:11.500 ] 00:10:11.500 }' 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.500 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.068 [2024-11-05 16:24:24.909350] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.068 [2024-11-05 16:24:24.909426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.068 [2024-11-05 16:24:24.909445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:12.068 [2024-11-05 16:24:24.909842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:12.068 [2024-11-05 16:24:24.910053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.068 [2024-11-05 16:24:24.910066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:12.068 [2024-11-05 16:24:24.910387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.068 BaseBdev3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.068 [ 00:10:12.068 { 00:10:12.068 "name": "BaseBdev3", 00:10:12.068 "aliases": [ 00:10:12.068 "148835f7-2acc-46ac-a6f8-d86e8ca8a442" 00:10:12.068 ], 00:10:12.068 "product_name": "Malloc disk", 00:10:12.068 "block_size": 512, 00:10:12.068 "num_blocks": 65536, 00:10:12.068 "uuid": "148835f7-2acc-46ac-a6f8-d86e8ca8a442", 00:10:12.068 "assigned_rate_limits": { 00:10:12.068 "rw_ios_per_sec": 0, 00:10:12.068 "rw_mbytes_per_sec": 0, 00:10:12.068 "r_mbytes_per_sec": 0, 00:10:12.068 "w_mbytes_per_sec": 0 00:10:12.068 }, 00:10:12.068 "claimed": true, 00:10:12.068 "claim_type": "exclusive_write", 00:10:12.068 "zoned": false, 00:10:12.068 "supported_io_types": { 00:10:12.068 "read": true, 00:10:12.068 "write": true, 00:10:12.068 "unmap": true, 00:10:12.068 "flush": true, 00:10:12.068 "reset": true, 00:10:12.068 "nvme_admin": false, 00:10:12.068 "nvme_io": false, 00:10:12.068 "nvme_io_md": false, 00:10:12.068 "write_zeroes": true, 00:10:12.068 "zcopy": true, 00:10:12.068 "get_zone_info": false, 00:10:12.068 "zone_management": false, 00:10:12.068 "zone_append": false, 00:10:12.068 "compare": false, 00:10:12.068 "compare_and_write": false, 00:10:12.068 "abort": true, 00:10:12.068 "seek_hole": false, 00:10:12.068 "seek_data": false, 00:10:12.068 "copy": true, 00:10:12.068 "nvme_iov_md": false 00:10:12.068 }, 00:10:12.068 "memory_domains": [ 00:10:12.068 { 00:10:12.068 "dma_device_id": "system", 00:10:12.068 "dma_device_type": 1 00:10:12.068 }, 00:10:12.068 { 00:10:12.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:12.068 "dma_device_type": 2 00:10:12.068 } 00:10:12.068 ], 00:10:12.068 "driver_specific": {} 00:10:12.068 } 00:10:12.068 ] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.068 16:24:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.068 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.068 "name": "Existed_Raid", 00:10:12.068 "uuid": "dbbbcbbe-9b76-42a0-a562-b9a2e31c16da", 00:10:12.068 "strip_size_kb": 64, 00:10:12.068 "state": "online", 00:10:12.068 "raid_level": "raid0", 00:10:12.068 "superblock": false, 00:10:12.068 "num_base_bdevs": 3, 00:10:12.068 "num_base_bdevs_discovered": 3, 00:10:12.068 "num_base_bdevs_operational": 3, 00:10:12.068 "base_bdevs_list": [ 00:10:12.068 { 00:10:12.068 "name": "BaseBdev1", 00:10:12.068 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:12.068 "is_configured": true, 00:10:12.068 "data_offset": 0, 00:10:12.068 "data_size": 65536 00:10:12.068 }, 00:10:12.068 { 00:10:12.068 "name": "BaseBdev2", 00:10:12.068 "uuid": "d498717f-92c7-4d35-8c63-62abdec33101", 00:10:12.068 "is_configured": true, 00:10:12.068 "data_offset": 0, 00:10:12.068 "data_size": 65536 00:10:12.068 }, 00:10:12.068 { 00:10:12.068 "name": "BaseBdev3", 00:10:12.069 "uuid": "148835f7-2acc-46ac-a6f8-d86e8ca8a442", 00:10:12.069 "is_configured": true, 00:10:12.069 "data_offset": 0, 00:10:12.069 "data_size": 65536 00:10:12.069 } 00:10:12.069 ] 00:10:12.069 }' 00:10:12.069 16:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.069 16:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.326 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.326 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.326 16:24:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.326 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.326 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.326 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.327 [2024-11-05 16:24:25.329148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.327 "name": "Existed_Raid", 00:10:12.327 "aliases": [ 00:10:12.327 "dbbbcbbe-9b76-42a0-a562-b9a2e31c16da" 00:10:12.327 ], 00:10:12.327 "product_name": "Raid Volume", 00:10:12.327 "block_size": 512, 00:10:12.327 "num_blocks": 196608, 00:10:12.327 "uuid": "dbbbcbbe-9b76-42a0-a562-b9a2e31c16da", 00:10:12.327 "assigned_rate_limits": { 00:10:12.327 "rw_ios_per_sec": 0, 00:10:12.327 "rw_mbytes_per_sec": 0, 00:10:12.327 "r_mbytes_per_sec": 0, 00:10:12.327 "w_mbytes_per_sec": 0 00:10:12.327 }, 00:10:12.327 "claimed": false, 00:10:12.327 "zoned": false, 00:10:12.327 "supported_io_types": { 00:10:12.327 "read": true, 00:10:12.327 "write": true, 00:10:12.327 "unmap": true, 00:10:12.327 "flush": true, 00:10:12.327 "reset": true, 00:10:12.327 "nvme_admin": false, 00:10:12.327 "nvme_io": false, 00:10:12.327 
"nvme_io_md": false, 00:10:12.327 "write_zeroes": true, 00:10:12.327 "zcopy": false, 00:10:12.327 "get_zone_info": false, 00:10:12.327 "zone_management": false, 00:10:12.327 "zone_append": false, 00:10:12.327 "compare": false, 00:10:12.327 "compare_and_write": false, 00:10:12.327 "abort": false, 00:10:12.327 "seek_hole": false, 00:10:12.327 "seek_data": false, 00:10:12.327 "copy": false, 00:10:12.327 "nvme_iov_md": false 00:10:12.327 }, 00:10:12.327 "memory_domains": [ 00:10:12.327 { 00:10:12.327 "dma_device_id": "system", 00:10:12.327 "dma_device_type": 1 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.327 "dma_device_type": 2 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "dma_device_id": "system", 00:10:12.327 "dma_device_type": 1 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.327 "dma_device_type": 2 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "dma_device_id": "system", 00:10:12.327 "dma_device_type": 1 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.327 "dma_device_type": 2 00:10:12.327 } 00:10:12.327 ], 00:10:12.327 "driver_specific": { 00:10:12.327 "raid": { 00:10:12.327 "uuid": "dbbbcbbe-9b76-42a0-a562-b9a2e31c16da", 00:10:12.327 "strip_size_kb": 64, 00:10:12.327 "state": "online", 00:10:12.327 "raid_level": "raid0", 00:10:12.327 "superblock": false, 00:10:12.327 "num_base_bdevs": 3, 00:10:12.327 "num_base_bdevs_discovered": 3, 00:10:12.327 "num_base_bdevs_operational": 3, 00:10:12.327 "base_bdevs_list": [ 00:10:12.327 { 00:10:12.327 "name": "BaseBdev1", 00:10:12.327 "uuid": "172db6c8-468b-4483-9310-814a7bc1d61d", 00:10:12.327 "is_configured": true, 00:10:12.327 "data_offset": 0, 00:10:12.327 "data_size": 65536 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "name": "BaseBdev2", 00:10:12.327 "uuid": "d498717f-92c7-4d35-8c63-62abdec33101", 00:10:12.327 "is_configured": true, 00:10:12.327 "data_offset": 0, 00:10:12.327 
"data_size": 65536 00:10:12.327 }, 00:10:12.327 { 00:10:12.327 "name": "BaseBdev3", 00:10:12.327 "uuid": "148835f7-2acc-46ac-a6f8-d86e8ca8a442", 00:10:12.327 "is_configured": true, 00:10:12.327 "data_offset": 0, 00:10:12.327 "data_size": 65536 00:10:12.327 } 00:10:12.327 ] 00:10:12.327 } 00:10:12.327 } 00:10:12.327 }' 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:12.327 BaseBdev2 00:10:12.327 BaseBdev3' 00:10:12.327 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.585 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.585 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.586 16:24:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.586 
16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 [2024-11-05 16:24:25.552705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.586 [2024-11-05 16:24:25.552805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.586 [2024-11-05 16:24:25.552909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.586 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.845 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.845 "name": "Existed_Raid", 00:10:12.845 "uuid": "dbbbcbbe-9b76-42a0-a562-b9a2e31c16da", 00:10:12.845 "strip_size_kb": 64, 00:10:12.845 "state": "offline", 00:10:12.845 "raid_level": "raid0", 00:10:12.845 "superblock": false, 00:10:12.845 "num_base_bdevs": 3, 00:10:12.845 "num_base_bdevs_discovered": 2, 00:10:12.845 "num_base_bdevs_operational": 2, 00:10:12.845 "base_bdevs_list": [ 00:10:12.845 { 00:10:12.845 "name": null, 00:10:12.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.845 "is_configured": false, 00:10:12.845 "data_offset": 0, 00:10:12.845 "data_size": 65536 00:10:12.845 }, 00:10:12.845 { 00:10:12.845 "name": "BaseBdev2", 00:10:12.845 "uuid": "d498717f-92c7-4d35-8c63-62abdec33101", 00:10:12.845 "is_configured": true, 00:10:12.845 "data_offset": 0, 00:10:12.845 "data_size": 65536 00:10:12.845 }, 00:10:12.845 { 00:10:12.845 "name": "BaseBdev3", 00:10:12.845 "uuid": "148835f7-2acc-46ac-a6f8-d86e8ca8a442", 00:10:12.845 "is_configured": true, 00:10:12.845 "data_offset": 0, 00:10:12.845 "data_size": 65536 00:10:12.845 } 00:10:12.845 ] 00:10:12.845 }' 
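The transition just verified (state `online` → `offline` after `bdev_malloc_delete BaseBdev1`) follows from the `has_redundancy raid0` check returning 1 in the trace: a raid0 array cannot survive losing a base bdev, so the expected state with 2 of 3 base bdevs is `offline`. The sketch below is an assumed simplification of that decision, not the verbatim `bdev_raid.sh` helper.

```shell
#!/usr/bin/env bash
# Simplified redundancy check (assumption: levels that can serve I/O in a
# degraded state return 0; raid0/concat, which stripe without parity or
# mirroring, return 1 so any base bdev loss takes the array offline).
has_redundancy() {
  case $1 in
    raid1|raid5f) return 0 ;;
    *)            return 1 ;;
  esac
}

# Mirrors the expected-state selection seen in the log for raid_level=raid0.
if has_redundancy raid0; then
  expected_state=online
else
  expected_state=offline
fi
echo "$expected_state"
```

This is why the subsequent `verify_raid_bdev_state Existed_Raid offline raid0 64 2` call expects `num_base_bdevs_operational=2` with a null entry in `base_bdevs_list` where BaseBdev1 used to be.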
00:10:12.845 16:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.845 16:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.103 [2024-11-05 16:24:26.051785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.103 16:24:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.103 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.104 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.104 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.104 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:13.104 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.104 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.104 [2024-11-05 16:24:26.192464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.104 [2024-11-05 16:24:26.192566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.362 BaseBdev2 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.362 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.362 [ 00:10:13.362 { 00:10:13.362 "name": "BaseBdev2", 00:10:13.362 "aliases": [ 00:10:13.362 "0f197e48-0b58-4360-a0b3-003edbaca75e" 00:10:13.362 ], 00:10:13.362 "product_name": "Malloc disk", 00:10:13.362 "block_size": 512, 00:10:13.362 "num_blocks": 65536, 00:10:13.362 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:13.362 "assigned_rate_limits": { 00:10:13.362 "rw_ios_per_sec": 0, 00:10:13.362 "rw_mbytes_per_sec": 0, 00:10:13.362 "r_mbytes_per_sec": 0, 00:10:13.362 "w_mbytes_per_sec": 0 00:10:13.362 }, 00:10:13.362 "claimed": false, 00:10:13.362 "zoned": false, 00:10:13.362 "supported_io_types": { 00:10:13.362 "read": true, 00:10:13.362 "write": true, 00:10:13.362 "unmap": true, 00:10:13.362 "flush": true, 00:10:13.362 "reset": true, 00:10:13.362 "nvme_admin": false, 00:10:13.362 "nvme_io": false, 00:10:13.362 "nvme_io_md": false, 00:10:13.362 "write_zeroes": true, 00:10:13.362 "zcopy": true, 00:10:13.362 "get_zone_info": false, 00:10:13.362 "zone_management": false, 00:10:13.362 "zone_append": false, 00:10:13.362 "compare": false, 00:10:13.363 "compare_and_write": false, 00:10:13.363 "abort": true, 00:10:13.363 "seek_hole": false, 00:10:13.363 "seek_data": false, 00:10:13.363 "copy": true, 00:10:13.363 "nvme_iov_md": false 
00:10:13.363 }, 00:10:13.363 "memory_domains": [ 00:10:13.363 { 00:10:13.363 "dma_device_id": "system", 00:10:13.363 "dma_device_type": 1 00:10:13.363 }, 00:10:13.363 { 00:10:13.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.363 "dma_device_type": 2 00:10:13.363 } 00:10:13.363 ], 00:10:13.363 "driver_specific": {} 00:10:13.363 } 00:10:13.363 ] 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.363 BaseBdev3 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.363 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.621 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.621 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.621 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.621 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.621 [ 00:10:13.621 { 00:10:13.621 "name": "BaseBdev3", 00:10:13.621 "aliases": [ 00:10:13.621 "5af4534c-2dc4-4d51-b5b6-0968601e38ee" 00:10:13.621 ], 00:10:13.621 "product_name": "Malloc disk", 00:10:13.621 "block_size": 512, 00:10:13.621 "num_blocks": 65536, 00:10:13.621 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:13.621 "assigned_rate_limits": { 00:10:13.621 "rw_ios_per_sec": 0, 00:10:13.621 "rw_mbytes_per_sec": 0, 00:10:13.621 "r_mbytes_per_sec": 0, 00:10:13.621 "w_mbytes_per_sec": 0 00:10:13.621 }, 00:10:13.621 "claimed": false, 00:10:13.621 "zoned": false, 00:10:13.621 "supported_io_types": { 00:10:13.621 "read": true, 00:10:13.621 "write": true, 00:10:13.621 "unmap": true, 00:10:13.621 "flush": true, 00:10:13.621 "reset": true, 00:10:13.621 "nvme_admin": false, 00:10:13.621 "nvme_io": false, 00:10:13.621 "nvme_io_md": false, 00:10:13.621 "write_zeroes": true, 00:10:13.621 "zcopy": true, 00:10:13.621 "get_zone_info": false, 00:10:13.621 "zone_management": false, 00:10:13.621 "zone_append": false, 00:10:13.621 "compare": false, 00:10:13.621 "compare_and_write": false, 00:10:13.621 "abort": true, 00:10:13.621 "seek_hole": false, 00:10:13.621 "seek_data": false, 00:10:13.621 "copy": true, 00:10:13.621 "nvme_iov_md": false 
00:10:13.621 }, 00:10:13.621 "memory_domains": [ 00:10:13.621 { 00:10:13.621 "dma_device_id": "system", 00:10:13.621 "dma_device_type": 1 00:10:13.621 }, 00:10:13.621 { 00:10:13.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.621 "dma_device_type": 2 00:10:13.621 } 00:10:13.621 ], 00:10:13.621 "driver_specific": {} 00:10:13.621 } 00:10:13.621 ] 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.622 [2024-11-05 16:24:26.469044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.622 [2024-11-05 16:24:26.469170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.622 [2024-11-05 16:24:26.469246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.622 [2024-11-05 16:24:26.471620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.622 "name": "Existed_Raid", 00:10:13.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.622 "strip_size_kb": 64, 00:10:13.622 "state": "configuring", 00:10:13.622 "raid_level": "raid0", 00:10:13.622 "superblock": false, 00:10:13.622 "num_base_bdevs": 3, 00:10:13.622 "num_base_bdevs_discovered": 2, 00:10:13.622 "num_base_bdevs_operational": 3, 
00:10:13.622 "base_bdevs_list": [ 00:10:13.622 { 00:10:13.622 "name": "BaseBdev1", 00:10:13.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.622 "is_configured": false, 00:10:13.622 "data_offset": 0, 00:10:13.622 "data_size": 0 00:10:13.622 }, 00:10:13.622 { 00:10:13.622 "name": "BaseBdev2", 00:10:13.622 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:13.622 "is_configured": true, 00:10:13.622 "data_offset": 0, 00:10:13.622 "data_size": 65536 00:10:13.622 }, 00:10:13.622 { 00:10:13.622 "name": "BaseBdev3", 00:10:13.622 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:13.622 "is_configured": true, 00:10:13.622 "data_offset": 0, 00:10:13.622 "data_size": 65536 00:10:13.622 } 00:10:13.622 ] 00:10:13.622 }' 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.622 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.889 [2024-11-05 16:24:26.860750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.889 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.889 "name": "Existed_Raid", 00:10:13.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.889 "strip_size_kb": 64, 00:10:13.889 "state": "configuring", 00:10:13.889 "raid_level": "raid0", 00:10:13.889 "superblock": false, 00:10:13.889 "num_base_bdevs": 3, 00:10:13.889 "num_base_bdevs_discovered": 1, 00:10:13.889 "num_base_bdevs_operational": 3, 00:10:13.889 "base_bdevs_list": [ 00:10:13.889 { 00:10:13.889 "name": "BaseBdev1", 00:10:13.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.889 "is_configured": false, 00:10:13.889 "data_offset": 0, 00:10:13.889 "data_size": 0 00:10:13.889 }, 00:10:13.889 { 00:10:13.889 "name": null, 
00:10:13.889 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:13.889 "is_configured": false, 00:10:13.889 "data_offset": 0, 00:10:13.889 "data_size": 65536 00:10:13.889 }, 00:10:13.889 { 00:10:13.889 "name": "BaseBdev3", 00:10:13.890 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:13.890 "is_configured": true, 00:10:13.890 "data_offset": 0, 00:10:13.890 "data_size": 65536 00:10:13.890 } 00:10:13.890 ] 00:10:13.890 }' 00:10:13.890 16:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.890 16:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 [2024-11-05 16:24:27.347781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.455 BaseBdev1 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 
-- # waitforbdev BaseBdev1 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.455 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 [ 00:10:14.455 { 00:10:14.455 "name": "BaseBdev1", 00:10:14.455 "aliases": [ 00:10:14.455 "23818b31-ae7e-484e-9cc3-f817e41c132d" 00:10:14.455 ], 00:10:14.455 "product_name": "Malloc disk", 00:10:14.455 "block_size": 512, 00:10:14.455 "num_blocks": 65536, 00:10:14.455 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:14.455 "assigned_rate_limits": { 00:10:14.455 "rw_ios_per_sec": 0, 00:10:14.455 "rw_mbytes_per_sec": 0, 00:10:14.455 "r_mbytes_per_sec": 0, 00:10:14.455 "w_mbytes_per_sec": 0 00:10:14.455 }, 00:10:14.455 "claimed": true, 00:10:14.455 "claim_type": "exclusive_write", 00:10:14.455 
"zoned": false, 00:10:14.455 "supported_io_types": { 00:10:14.455 "read": true, 00:10:14.455 "write": true, 00:10:14.455 "unmap": true, 00:10:14.455 "flush": true, 00:10:14.455 "reset": true, 00:10:14.455 "nvme_admin": false, 00:10:14.455 "nvme_io": false, 00:10:14.455 "nvme_io_md": false, 00:10:14.455 "write_zeroes": true, 00:10:14.455 "zcopy": true, 00:10:14.455 "get_zone_info": false, 00:10:14.455 "zone_management": false, 00:10:14.455 "zone_append": false, 00:10:14.455 "compare": false, 00:10:14.455 "compare_and_write": false, 00:10:14.455 "abort": true, 00:10:14.455 "seek_hole": false, 00:10:14.455 "seek_data": false, 00:10:14.455 "copy": true, 00:10:14.455 "nvme_iov_md": false 00:10:14.455 }, 00:10:14.455 "memory_domains": [ 00:10:14.455 { 00:10:14.455 "dma_device_id": "system", 00:10:14.455 "dma_device_type": 1 00:10:14.455 }, 00:10:14.455 { 00:10:14.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.456 "dma_device_type": 2 00:10:14.456 } 00:10:14.456 ], 00:10:14.456 "driver_specific": {} 00:10:14.456 } 00:10:14.456 ] 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.456 
16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.456 "name": "Existed_Raid", 00:10:14.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.456 "strip_size_kb": 64, 00:10:14.456 "state": "configuring", 00:10:14.456 "raid_level": "raid0", 00:10:14.456 "superblock": false, 00:10:14.456 "num_base_bdevs": 3, 00:10:14.456 "num_base_bdevs_discovered": 2, 00:10:14.456 "num_base_bdevs_operational": 3, 00:10:14.456 "base_bdevs_list": [ 00:10:14.456 { 00:10:14.456 "name": "BaseBdev1", 00:10:14.456 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:14.456 "is_configured": true, 00:10:14.456 "data_offset": 0, 00:10:14.456 "data_size": 65536 00:10:14.456 }, 00:10:14.456 { 00:10:14.456 "name": null, 00:10:14.456 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:14.456 "is_configured": false, 00:10:14.456 "data_offset": 0, 00:10:14.456 "data_size": 65536 00:10:14.456 }, 00:10:14.456 { 00:10:14.456 "name": "BaseBdev3", 00:10:14.456 
"uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:14.456 "is_configured": true, 00:10:14.456 "data_offset": 0, 00:10:14.456 "data_size": 65536 00:10:14.456 } 00:10:14.456 ] 00:10:14.456 }' 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.456 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 [2024-11-05 16:24:27.791535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.714 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.041 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.041 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.041 "name": "Existed_Raid", 00:10:15.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.041 "strip_size_kb": 64, 00:10:15.041 "state": "configuring", 00:10:15.041 "raid_level": "raid0", 00:10:15.041 "superblock": false, 00:10:15.041 "num_base_bdevs": 3, 00:10:15.041 "num_base_bdevs_discovered": 1, 00:10:15.041 "num_base_bdevs_operational": 3, 00:10:15.041 "base_bdevs_list": [ 00:10:15.041 { 00:10:15.041 "name": "BaseBdev1", 00:10:15.041 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:15.041 
"is_configured": true, 00:10:15.041 "data_offset": 0, 00:10:15.041 "data_size": 65536 00:10:15.041 }, 00:10:15.041 { 00:10:15.041 "name": null, 00:10:15.041 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:15.041 "is_configured": false, 00:10:15.041 "data_offset": 0, 00:10:15.041 "data_size": 65536 00:10:15.041 }, 00:10:15.041 { 00:10:15.041 "name": null, 00:10:15.041 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:15.041 "is_configured": false, 00:10:15.041 "data_offset": 0, 00:10:15.041 "data_size": 65536 00:10:15.041 } 00:10:15.041 ] 00:10:15.041 }' 00:10:15.041 16:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.041 16:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-05 16:24:28.294677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.329 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.330 "name": "Existed_Raid", 00:10:15.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.330 
"strip_size_kb": 64, 00:10:15.330 "state": "configuring", 00:10:15.330 "raid_level": "raid0", 00:10:15.330 "superblock": false, 00:10:15.330 "num_base_bdevs": 3, 00:10:15.330 "num_base_bdevs_discovered": 2, 00:10:15.330 "num_base_bdevs_operational": 3, 00:10:15.330 "base_bdevs_list": [ 00:10:15.330 { 00:10:15.330 "name": "BaseBdev1", 00:10:15.330 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:15.330 "is_configured": true, 00:10:15.330 "data_offset": 0, 00:10:15.330 "data_size": 65536 00:10:15.330 }, 00:10:15.330 { 00:10:15.330 "name": null, 00:10:15.330 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:15.330 "is_configured": false, 00:10:15.330 "data_offset": 0, 00:10:15.330 "data_size": 65536 00:10:15.330 }, 00:10:15.330 { 00:10:15.330 "name": "BaseBdev3", 00:10:15.330 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:15.330 "is_configured": true, 00:10:15.330 "data_offset": 0, 00:10:15.330 "data_size": 65536 00:10:15.330 } 00:10:15.330 ] 00:10:15.330 }' 00:10:15.330 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.330 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.899 [2024-11-05 16:24:28.793823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.899 16:24:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.899 "name": "Existed_Raid", 00:10:15.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.899 "strip_size_kb": 64, 00:10:15.899 "state": "configuring", 00:10:15.899 "raid_level": "raid0", 00:10:15.899 "superblock": false, 00:10:15.899 "num_base_bdevs": 3, 00:10:15.899 "num_base_bdevs_discovered": 1, 00:10:15.899 "num_base_bdevs_operational": 3, 00:10:15.899 "base_bdevs_list": [ 00:10:15.899 { 00:10:15.899 "name": null, 00:10:15.899 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:15.899 "is_configured": false, 00:10:15.899 "data_offset": 0, 00:10:15.899 "data_size": 65536 00:10:15.899 }, 00:10:15.899 { 00:10:15.899 "name": null, 00:10:15.899 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:15.899 "is_configured": false, 00:10:15.899 "data_offset": 0, 00:10:15.899 "data_size": 65536 00:10:15.899 }, 00:10:15.899 { 00:10:15.899 "name": "BaseBdev3", 00:10:15.899 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:15.899 "is_configured": true, 00:10:15.899 "data_offset": 0, 00:10:15.899 "data_size": 65536 00:10:15.899 } 00:10:15.899 ] 00:10:15.899 }' 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.899 16:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.468 [2024-11-05 16:24:29.454292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.468 "name": "Existed_Raid", 00:10:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.468 "strip_size_kb": 64, 00:10:16.468 "state": "configuring", 00:10:16.468 "raid_level": "raid0", 00:10:16.468 "superblock": false, 00:10:16.468 "num_base_bdevs": 3, 00:10:16.468 "num_base_bdevs_discovered": 2, 00:10:16.468 "num_base_bdevs_operational": 3, 00:10:16.468 "base_bdevs_list": [ 00:10:16.468 { 00:10:16.468 "name": null, 00:10:16.468 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:16.468 "is_configured": false, 00:10:16.468 "data_offset": 0, 00:10:16.468 "data_size": 65536 00:10:16.468 }, 00:10:16.468 { 00:10:16.468 "name": "BaseBdev2", 00:10:16.468 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:16.468 "is_configured": true, 00:10:16.468 "data_offset": 0, 00:10:16.468 "data_size": 65536 00:10:16.468 }, 00:10:16.468 { 00:10:16.468 "name": "BaseBdev3", 00:10:16.468 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:16.468 "is_configured": true, 00:10:16.468 "data_offset": 0, 00:10:16.468 "data_size": 65536 00:10:16.468 } 00:10:16.468 ] 00:10:16.468 }' 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.468 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 16:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 23818b31-ae7e-484e-9cc3-f817e41c132d 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 [2024-11-05 16:24:30.077244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:17.039 [2024-11-05 16:24:30.077365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.039 [2024-11-05 16:24:30.077399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, 
blocklen 512 00:10:17.039 [2024-11-05 16:24:30.077738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:17.039 [2024-11-05 16:24:30.077968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.039 [2024-11-05 16:24:30.078019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:17.039 [2024-11-05 16:24:30.078374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.039 NewBaseBdev 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.039 [ 00:10:17.039 { 00:10:17.039 "name": "NewBaseBdev", 00:10:17.039 "aliases": [ 00:10:17.039 "23818b31-ae7e-484e-9cc3-f817e41c132d" 00:10:17.039 ], 00:10:17.039 "product_name": "Malloc disk", 00:10:17.039 "block_size": 512, 00:10:17.039 "num_blocks": 65536, 00:10:17.039 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:17.039 "assigned_rate_limits": { 00:10:17.039 "rw_ios_per_sec": 0, 00:10:17.039 "rw_mbytes_per_sec": 0, 00:10:17.039 "r_mbytes_per_sec": 0, 00:10:17.039 "w_mbytes_per_sec": 0 00:10:17.039 }, 00:10:17.039 "claimed": true, 00:10:17.039 "claim_type": "exclusive_write", 00:10:17.039 "zoned": false, 00:10:17.039 "supported_io_types": { 00:10:17.039 "read": true, 00:10:17.039 "write": true, 00:10:17.039 "unmap": true, 00:10:17.039 "flush": true, 00:10:17.039 "reset": true, 00:10:17.039 "nvme_admin": false, 00:10:17.039 "nvme_io": false, 00:10:17.039 "nvme_io_md": false, 00:10:17.039 "write_zeroes": true, 00:10:17.039 "zcopy": true, 00:10:17.039 "get_zone_info": false, 00:10:17.039 "zone_management": false, 00:10:17.039 "zone_append": false, 00:10:17.039 "compare": false, 00:10:17.039 "compare_and_write": false, 00:10:17.039 "abort": true, 00:10:17.039 "seek_hole": false, 00:10:17.039 "seek_data": false, 00:10:17.039 "copy": true, 00:10:17.039 "nvme_iov_md": false 00:10:17.039 }, 00:10:17.039 "memory_domains": [ 00:10:17.039 { 00:10:17.039 "dma_device_id": "system", 00:10:17.039 "dma_device_type": 1 00:10:17.039 }, 00:10:17.039 { 00:10:17.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.039 "dma_device_type": 2 00:10:17.039 } 00:10:17.039 ], 00:10:17.039 "driver_specific": {} 00:10:17.039 } 00:10:17.039 ] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.039 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.302 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.302 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.302 "name": "Existed_Raid", 00:10:17.302 "uuid": "95849b5c-e390-4d2e-94a0-60984baa8bb1", 00:10:17.302 "strip_size_kb": 64, 
00:10:17.302 "state": "online", 00:10:17.302 "raid_level": "raid0", 00:10:17.302 "superblock": false, 00:10:17.302 "num_base_bdevs": 3, 00:10:17.302 "num_base_bdevs_discovered": 3, 00:10:17.302 "num_base_bdevs_operational": 3, 00:10:17.302 "base_bdevs_list": [ 00:10:17.302 { 00:10:17.302 "name": "NewBaseBdev", 00:10:17.302 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:17.302 "is_configured": true, 00:10:17.302 "data_offset": 0, 00:10:17.302 "data_size": 65536 00:10:17.302 }, 00:10:17.302 { 00:10:17.302 "name": "BaseBdev2", 00:10:17.302 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:17.302 "is_configured": true, 00:10:17.302 "data_offset": 0, 00:10:17.302 "data_size": 65536 00:10:17.302 }, 00:10:17.302 { 00:10:17.302 "name": "BaseBdev3", 00:10:17.302 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:17.302 "is_configured": true, 00:10:17.302 "data_offset": 0, 00:10:17.302 "data_size": 65536 00:10:17.302 } 00:10:17.302 ] 00:10:17.302 }' 00:10:17.302 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.302 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.568 
16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.568 [2024-11-05 16:24:30.573031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.568 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.568 "name": "Existed_Raid", 00:10:17.568 "aliases": [ 00:10:17.568 "95849b5c-e390-4d2e-94a0-60984baa8bb1" 00:10:17.568 ], 00:10:17.568 "product_name": "Raid Volume", 00:10:17.568 "block_size": 512, 00:10:17.568 "num_blocks": 196608, 00:10:17.568 "uuid": "95849b5c-e390-4d2e-94a0-60984baa8bb1", 00:10:17.568 "assigned_rate_limits": { 00:10:17.568 "rw_ios_per_sec": 0, 00:10:17.568 "rw_mbytes_per_sec": 0, 00:10:17.568 "r_mbytes_per_sec": 0, 00:10:17.568 "w_mbytes_per_sec": 0 00:10:17.568 }, 00:10:17.568 "claimed": false, 00:10:17.568 "zoned": false, 00:10:17.568 "supported_io_types": { 00:10:17.568 "read": true, 00:10:17.568 "write": true, 00:10:17.568 "unmap": true, 00:10:17.568 "flush": true, 00:10:17.568 "reset": true, 00:10:17.568 "nvme_admin": false, 00:10:17.568 "nvme_io": false, 00:10:17.568 "nvme_io_md": false, 00:10:17.568 "write_zeroes": true, 00:10:17.568 "zcopy": false, 00:10:17.568 "get_zone_info": false, 00:10:17.568 "zone_management": false, 00:10:17.568 "zone_append": false, 00:10:17.568 "compare": false, 00:10:17.568 "compare_and_write": false, 00:10:17.568 "abort": false, 00:10:17.568 "seek_hole": false, 00:10:17.568 "seek_data": false, 00:10:17.568 "copy": false, 00:10:17.568 "nvme_iov_md": false 00:10:17.568 }, 00:10:17.568 "memory_domains": [ 00:10:17.568 { 00:10:17.568 "dma_device_id": "system", 00:10:17.568 
"dma_device_type": 1 00:10:17.568 }, 00:10:17.568 { 00:10:17.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.568 "dma_device_type": 2 00:10:17.568 }, 00:10:17.568 { 00:10:17.568 "dma_device_id": "system", 00:10:17.569 "dma_device_type": 1 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.569 "dma_device_type": 2 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "dma_device_id": "system", 00:10:17.569 "dma_device_type": 1 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.569 "dma_device_type": 2 00:10:17.569 } 00:10:17.569 ], 00:10:17.569 "driver_specific": { 00:10:17.569 "raid": { 00:10:17.569 "uuid": "95849b5c-e390-4d2e-94a0-60984baa8bb1", 00:10:17.569 "strip_size_kb": 64, 00:10:17.569 "state": "online", 00:10:17.569 "raid_level": "raid0", 00:10:17.569 "superblock": false, 00:10:17.569 "num_base_bdevs": 3, 00:10:17.569 "num_base_bdevs_discovered": 3, 00:10:17.569 "num_base_bdevs_operational": 3, 00:10:17.569 "base_bdevs_list": [ 00:10:17.569 { 00:10:17.569 "name": "NewBaseBdev", 00:10:17.569 "uuid": "23818b31-ae7e-484e-9cc3-f817e41c132d", 00:10:17.569 "is_configured": true, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 65536 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "name": "BaseBdev2", 00:10:17.569 "uuid": "0f197e48-0b58-4360-a0b3-003edbaca75e", 00:10:17.569 "is_configured": true, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 65536 00:10:17.569 }, 00:10:17.569 { 00:10:17.569 "name": "BaseBdev3", 00:10:17.569 "uuid": "5af4534c-2dc4-4d51-b5b6-0968601e38ee", 00:10:17.569 "is_configured": true, 00:10:17.569 "data_offset": 0, 00:10:17.569 "data_size": 65536 00:10:17.569 } 00:10:17.569 ] 00:10:17.569 } 00:10:17.569 } 00:10:17.569 }' 00:10:17.569 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.837 16:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.837 BaseBdev2 00:10:17.837 BaseBdev3' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 16:24:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.837 [2024-11-05 16:24:30.860410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.837 [2024-11-05 16:24:30.860482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.837 [2024-11-05 16:24:30.860613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.837 [2024-11-05 16:24:30.860680] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.837 [2024-11-05 16:24:30.860695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64061 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 64061 ']' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64061 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64061 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64061' 00:10:17.837 killing process with pid 64061 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64061 00:10:17.837 [2024-11-05 16:24:30.910542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.837 16:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64061 00:10:18.427 [2024-11-05 16:24:31.280738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.841 
00:10:19.841 real 0m10.818s 00:10:19.841 user 0m17.006s 00:10:19.841 sys 0m1.727s 00:10:19.841 ************************************ 00:10:19.841 END TEST raid_state_function_test 00:10:19.841 ************************************ 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.841 16:24:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:19.841 16:24:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:19.841 16:24:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.841 16:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.841 ************************************ 00:10:19.841 START TEST raid_state_function_test_sb 00:10:19.841 ************************************ 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.841 
16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:19.841 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64682 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64682' 00:10:19.842 Process raid pid: 64682 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64682 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64682 ']' 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.842 16:24:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.842 [2024-11-05 16:24:32.781232] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:10:19.842 [2024-11-05 16:24:32.781375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.101 [2024-11-05 16:24:32.963177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.101 [2024-11-05 16:24:33.107246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.360 [2024-11-05 16:24:33.350190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.360 [2024-11-05 16:24:33.350316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.646 [2024-11-05 16:24:33.692571] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:20.646 [2024-11-05 16:24:33.692627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:20.646 [2024-11-05 16:24:33.692640] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:20.646 [2024-11-05 16:24:33.692651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:20.646 [2024-11-05 16:24:33.692659] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:20.646 [2024-11-05 16:24:33.692668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.646 16:24:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.905 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.905 "name": "Existed_Raid", 00:10:20.905 "uuid": "d8425407-a48b-44a8-81b6-f70d0567d3c4", 00:10:20.905 "strip_size_kb": 64, 00:10:20.905 "state": "configuring", 00:10:20.905 "raid_level": "raid0", 00:10:20.905 "superblock": true, 00:10:20.905 "num_base_bdevs": 3, 00:10:20.905 "num_base_bdevs_discovered": 0, 00:10:20.905 "num_base_bdevs_operational": 3, 00:10:20.905 "base_bdevs_list": [ 00:10:20.905 { 00:10:20.905 "name": "BaseBdev1", 00:10:20.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.905 "is_configured": false, 00:10:20.905 "data_offset": 0, 00:10:20.905 "data_size": 0 00:10:20.905 }, 00:10:20.905 { 00:10:20.905 "name": "BaseBdev2", 00:10:20.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.905 "is_configured": false, 00:10:20.905 "data_offset": 0, 00:10:20.905 "data_size": 0 00:10:20.905 }, 00:10:20.905 { 00:10:20.905 "name": "BaseBdev3", 00:10:20.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.905 "is_configured": false, 00:10:20.905 "data_offset": 0, 00:10:20.905 "data_size": 0 00:10:20.905 } 00:10:20.905 ] 00:10:20.905 }' 00:10:20.905 16:24:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.905 16:24:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.164 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.165 [2024-11-05 16:24:34.167683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.165 [2024-11-05 16:24:34.167788] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.165 [2024-11-05 16:24:34.175674] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.165 [2024-11-05 16:24:34.175726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.165 [2024-11-05 16:24:34.175738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.165 [2024-11-05 16:24:34.175749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.165 [2024-11-05 16:24:34.175756] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.165 [2024-11-05 16:24:34.175767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.165 [2024-11-05 16:24:34.228013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.165 BaseBdev1 
00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.165 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.165 [ 00:10:21.165 { 00:10:21.165 "name": "BaseBdev1", 00:10:21.165 "aliases": [ 00:10:21.165 "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e" 00:10:21.165 ], 00:10:21.165 "product_name": "Malloc disk", 00:10:21.165 "block_size": 512, 00:10:21.165 "num_blocks": 65536, 00:10:21.165 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:21.165 "assigned_rate_limits": { 00:10:21.165 
"rw_ios_per_sec": 0, 00:10:21.165 "rw_mbytes_per_sec": 0, 00:10:21.165 "r_mbytes_per_sec": 0, 00:10:21.165 "w_mbytes_per_sec": 0 00:10:21.165 }, 00:10:21.165 "claimed": true, 00:10:21.165 "claim_type": "exclusive_write", 00:10:21.165 "zoned": false, 00:10:21.425 "supported_io_types": { 00:10:21.425 "read": true, 00:10:21.425 "write": true, 00:10:21.425 "unmap": true, 00:10:21.425 "flush": true, 00:10:21.425 "reset": true, 00:10:21.425 "nvme_admin": false, 00:10:21.425 "nvme_io": false, 00:10:21.425 "nvme_io_md": false, 00:10:21.425 "write_zeroes": true, 00:10:21.425 "zcopy": true, 00:10:21.425 "get_zone_info": false, 00:10:21.425 "zone_management": false, 00:10:21.425 "zone_append": false, 00:10:21.425 "compare": false, 00:10:21.425 "compare_and_write": false, 00:10:21.425 "abort": true, 00:10:21.425 "seek_hole": false, 00:10:21.425 "seek_data": false, 00:10:21.425 "copy": true, 00:10:21.425 "nvme_iov_md": false 00:10:21.425 }, 00:10:21.425 "memory_domains": [ 00:10:21.425 { 00:10:21.425 "dma_device_id": "system", 00:10:21.425 "dma_device_type": 1 00:10:21.425 }, 00:10:21.425 { 00:10:21.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.425 "dma_device_type": 2 00:10:21.425 } 00:10:21.425 ], 00:10:21.425 "driver_specific": {} 00:10:21.425 } 00:10:21.425 ] 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.425 "name": "Existed_Raid", 00:10:21.425 "uuid": "c087a941-2b00-4960-a5bb-37162f0ceebf", 00:10:21.425 "strip_size_kb": 64, 00:10:21.425 "state": "configuring", 00:10:21.425 "raid_level": "raid0", 00:10:21.425 "superblock": true, 00:10:21.425 "num_base_bdevs": 3, 00:10:21.425 "num_base_bdevs_discovered": 1, 00:10:21.425 "num_base_bdevs_operational": 3, 00:10:21.425 "base_bdevs_list": [ 00:10:21.425 { 00:10:21.425 "name": "BaseBdev1", 00:10:21.425 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:21.425 "is_configured": true, 00:10:21.425 "data_offset": 2048, 00:10:21.425 "data_size": 63488 
00:10:21.425 }, 00:10:21.425 { 00:10:21.425 "name": "BaseBdev2", 00:10:21.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.425 "is_configured": false, 00:10:21.425 "data_offset": 0, 00:10:21.425 "data_size": 0 00:10:21.425 }, 00:10:21.425 { 00:10:21.425 "name": "BaseBdev3", 00:10:21.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.425 "is_configured": false, 00:10:21.425 "data_offset": 0, 00:10:21.425 "data_size": 0 00:10:21.425 } 00:10:21.425 ] 00:10:21.425 }' 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.425 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.685 [2024-11-05 16:24:34.719285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.685 [2024-11-05 16:24:34.719421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.685 [2024-11-05 16:24:34.727319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.685 [2024-11-05 
16:24:34.729260] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.685 [2024-11-05 16:24:34.729305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.685 [2024-11-05 16:24:34.729316] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.685 [2024-11-05 16:24:34.729325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.685 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.686 16:24:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.945 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.945 "name": "Existed_Raid", 00:10:21.945 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:21.945 "strip_size_kb": 64, 00:10:21.945 "state": "configuring", 00:10:21.945 "raid_level": "raid0", 00:10:21.945 "superblock": true, 00:10:21.945 "num_base_bdevs": 3, 00:10:21.945 "num_base_bdevs_discovered": 1, 00:10:21.945 "num_base_bdevs_operational": 3, 00:10:21.945 "base_bdevs_list": [ 00:10:21.945 { 00:10:21.945 "name": "BaseBdev1", 00:10:21.945 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:21.945 "is_configured": true, 00:10:21.945 "data_offset": 2048, 00:10:21.945 "data_size": 63488 00:10:21.945 }, 00:10:21.945 { 00:10:21.945 "name": "BaseBdev2", 00:10:21.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.945 "is_configured": false, 00:10:21.945 "data_offset": 0, 00:10:21.945 "data_size": 0 00:10:21.945 }, 00:10:21.945 { 00:10:21.945 "name": "BaseBdev3", 00:10:21.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.945 "is_configured": false, 00:10:21.945 "data_offset": 0, 00:10:21.945 "data_size": 0 00:10:21.945 } 00:10:21.945 ] 00:10:21.945 }' 00:10:21.945 16:24:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.945 16:24:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.205 [2024-11-05 16:24:35.201573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.205 BaseBdev2 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.205 [ 00:10:22.205 { 00:10:22.205 "name": "BaseBdev2", 00:10:22.205 "aliases": [ 00:10:22.205 "04998fb4-41f5-49ea-ac04-be5f70fa22bd" 00:10:22.205 ], 00:10:22.205 "product_name": "Malloc disk", 00:10:22.205 "block_size": 512, 00:10:22.205 "num_blocks": 65536, 00:10:22.205 "uuid": "04998fb4-41f5-49ea-ac04-be5f70fa22bd", 00:10:22.205 "assigned_rate_limits": { 00:10:22.205 "rw_ios_per_sec": 0, 00:10:22.205 "rw_mbytes_per_sec": 0, 00:10:22.205 "r_mbytes_per_sec": 0, 00:10:22.205 "w_mbytes_per_sec": 0 00:10:22.205 }, 00:10:22.205 "claimed": true, 00:10:22.205 "claim_type": "exclusive_write", 00:10:22.205 "zoned": false, 00:10:22.205 "supported_io_types": { 00:10:22.205 "read": true, 00:10:22.205 "write": true, 00:10:22.205 "unmap": true, 00:10:22.205 "flush": true, 00:10:22.205 "reset": true, 00:10:22.205 "nvme_admin": false, 00:10:22.205 "nvme_io": false, 00:10:22.205 "nvme_io_md": false, 00:10:22.205 "write_zeroes": true, 00:10:22.205 "zcopy": true, 00:10:22.205 "get_zone_info": false, 00:10:22.205 "zone_management": false, 00:10:22.205 "zone_append": false, 00:10:22.205 "compare": false, 00:10:22.205 "compare_and_write": false, 00:10:22.205 "abort": true, 00:10:22.205 "seek_hole": false, 00:10:22.205 "seek_data": false, 00:10:22.205 "copy": true, 00:10:22.205 "nvme_iov_md": false 00:10:22.205 }, 00:10:22.205 "memory_domains": [ 00:10:22.205 { 00:10:22.205 "dma_device_id": "system", 00:10:22.205 "dma_device_type": 1 00:10:22.205 }, 00:10:22.205 { 00:10:22.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.205 "dma_device_type": 2 00:10:22.205 } 00:10:22.205 ], 00:10:22.205 "driver_specific": {} 00:10:22.205 } 00:10:22.205 ] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.205 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.205 "name": "Existed_Raid", 00:10:22.205 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:22.205 "strip_size_kb": 64, 00:10:22.205 "state": "configuring", 00:10:22.205 "raid_level": "raid0", 00:10:22.205 "superblock": true, 00:10:22.205 "num_base_bdevs": 3, 00:10:22.205 "num_base_bdevs_discovered": 2, 00:10:22.206 "num_base_bdevs_operational": 3, 00:10:22.206 "base_bdevs_list": [ 00:10:22.206 { 00:10:22.206 "name": "BaseBdev1", 00:10:22.206 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:22.206 "is_configured": true, 00:10:22.206 "data_offset": 2048, 00:10:22.206 "data_size": 63488 00:10:22.206 }, 00:10:22.206 { 00:10:22.206 "name": "BaseBdev2", 00:10:22.206 "uuid": "04998fb4-41f5-49ea-ac04-be5f70fa22bd", 00:10:22.206 "is_configured": true, 00:10:22.206 "data_offset": 2048, 00:10:22.206 "data_size": 63488 00:10:22.206 }, 00:10:22.206 { 00:10:22.206 "name": "BaseBdev3", 00:10:22.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.206 "is_configured": false, 00:10:22.206 "data_offset": 0, 00:10:22.206 "data_size": 0 00:10:22.206 } 00:10:22.206 ] 00:10:22.206 }' 00:10:22.206 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.206 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.773 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 [2024-11-05 16:24:35.776549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.774 [2024-11-05 16:24:35.776955] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.774 [2024-11-05 16:24:35.777026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.774 [2024-11-05 16:24:35.777354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:22.774 [2024-11-05 16:24:35.777586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.774 [2024-11-05 16:24:35.777633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:22.774 BaseBdev3 00:10:22.774 [2024-11-05 16:24:35.777838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 [ 00:10:22.774 { 00:10:22.774 "name": "BaseBdev3", 00:10:22.774 "aliases": [ 00:10:22.774 "2dc6d76f-e96f-4e15-973a-bff06d423c15" 00:10:22.774 ], 00:10:22.774 "product_name": "Malloc disk", 00:10:22.774 "block_size": 512, 00:10:22.774 "num_blocks": 65536, 00:10:22.774 "uuid": "2dc6d76f-e96f-4e15-973a-bff06d423c15", 00:10:22.774 "assigned_rate_limits": { 00:10:22.774 "rw_ios_per_sec": 0, 00:10:22.774 "rw_mbytes_per_sec": 0, 00:10:22.774 "r_mbytes_per_sec": 0, 00:10:22.774 "w_mbytes_per_sec": 0 00:10:22.774 }, 00:10:22.774 "claimed": true, 00:10:22.774 "claim_type": "exclusive_write", 00:10:22.774 "zoned": false, 00:10:22.774 "supported_io_types": { 00:10:22.774 "read": true, 00:10:22.774 "write": true, 00:10:22.774 "unmap": true, 00:10:22.774 "flush": true, 00:10:22.774 "reset": true, 00:10:22.774 "nvme_admin": false, 00:10:22.774 "nvme_io": false, 00:10:22.774 "nvme_io_md": false, 00:10:22.774 "write_zeroes": true, 00:10:22.774 "zcopy": true, 00:10:22.774 "get_zone_info": false, 00:10:22.774 "zone_management": false, 00:10:22.774 "zone_append": false, 00:10:22.774 "compare": false, 00:10:22.774 "compare_and_write": false, 00:10:22.774 "abort": true, 00:10:22.774 "seek_hole": false, 00:10:22.774 "seek_data": false, 00:10:22.774 "copy": true, 00:10:22.774 "nvme_iov_md": false 00:10:22.774 }, 00:10:22.774 "memory_domains": [ 00:10:22.774 { 00:10:22.774 "dma_device_id": "system", 00:10:22.774 "dma_device_type": 1 00:10:22.774 }, 00:10:22.774 { 00:10:22.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.774 "dma_device_type": 2 00:10:22.774 } 00:10:22.774 ], 00:10:22.774 "driver_specific": 
{} 00:10:22.774 } 00:10:22.774 ] 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 
16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.774 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.032 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.032 "name": "Existed_Raid", 00:10:23.032 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:23.032 "strip_size_kb": 64, 00:10:23.032 "state": "online", 00:10:23.032 "raid_level": "raid0", 00:10:23.032 "superblock": true, 00:10:23.033 "num_base_bdevs": 3, 00:10:23.033 "num_base_bdevs_discovered": 3, 00:10:23.033 "num_base_bdevs_operational": 3, 00:10:23.033 "base_bdevs_list": [ 00:10:23.033 { 00:10:23.033 "name": "BaseBdev1", 00:10:23.033 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:23.033 "is_configured": true, 00:10:23.033 "data_offset": 2048, 00:10:23.033 "data_size": 63488 00:10:23.033 }, 00:10:23.033 { 00:10:23.033 "name": "BaseBdev2", 00:10:23.033 "uuid": "04998fb4-41f5-49ea-ac04-be5f70fa22bd", 00:10:23.033 "is_configured": true, 00:10:23.033 "data_offset": 2048, 00:10:23.033 "data_size": 63488 00:10:23.033 }, 00:10:23.033 { 00:10:23.033 "name": "BaseBdev3", 00:10:23.033 "uuid": "2dc6d76f-e96f-4e15-973a-bff06d423c15", 00:10:23.033 "is_configured": true, 00:10:23.033 "data_offset": 2048, 00:10:23.033 "data_size": 63488 00:10:23.033 } 00:10:23.033 ] 00:10:23.033 }' 00:10:23.033 16:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.033 16:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.291 [2024-11-05 16:24:36.244209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.291 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.291 "name": "Existed_Raid", 00:10:23.291 "aliases": [ 00:10:23.291 "f0da10db-031b-4294-9ced-0c6aa81bce37" 00:10:23.291 ], 00:10:23.291 "product_name": "Raid Volume", 00:10:23.291 "block_size": 512, 00:10:23.291 "num_blocks": 190464, 00:10:23.291 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:23.291 "assigned_rate_limits": { 00:10:23.291 "rw_ios_per_sec": 0, 00:10:23.291 "rw_mbytes_per_sec": 0, 00:10:23.291 "r_mbytes_per_sec": 0, 00:10:23.291 "w_mbytes_per_sec": 0 00:10:23.291 }, 00:10:23.291 "claimed": false, 00:10:23.291 "zoned": false, 00:10:23.291 "supported_io_types": { 00:10:23.291 "read": true, 00:10:23.291 "write": true, 00:10:23.292 "unmap": true, 00:10:23.292 "flush": true, 00:10:23.292 "reset": true, 00:10:23.292 "nvme_admin": false, 00:10:23.292 "nvme_io": false, 00:10:23.292 "nvme_io_md": false, 00:10:23.292 
"write_zeroes": true, 00:10:23.292 "zcopy": false, 00:10:23.292 "get_zone_info": false, 00:10:23.292 "zone_management": false, 00:10:23.292 "zone_append": false, 00:10:23.292 "compare": false, 00:10:23.292 "compare_and_write": false, 00:10:23.292 "abort": false, 00:10:23.292 "seek_hole": false, 00:10:23.292 "seek_data": false, 00:10:23.292 "copy": false, 00:10:23.292 "nvme_iov_md": false 00:10:23.292 }, 00:10:23.292 "memory_domains": [ 00:10:23.292 { 00:10:23.292 "dma_device_id": "system", 00:10:23.292 "dma_device_type": 1 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.292 "dma_device_type": 2 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "dma_device_id": "system", 00:10:23.292 "dma_device_type": 1 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.292 "dma_device_type": 2 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "dma_device_id": "system", 00:10:23.292 "dma_device_type": 1 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.292 "dma_device_type": 2 00:10:23.292 } 00:10:23.292 ], 00:10:23.292 "driver_specific": { 00:10:23.292 "raid": { 00:10:23.292 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:23.292 "strip_size_kb": 64, 00:10:23.292 "state": "online", 00:10:23.292 "raid_level": "raid0", 00:10:23.292 "superblock": true, 00:10:23.292 "num_base_bdevs": 3, 00:10:23.292 "num_base_bdevs_discovered": 3, 00:10:23.292 "num_base_bdevs_operational": 3, 00:10:23.292 "base_bdevs_list": [ 00:10:23.292 { 00:10:23.292 "name": "BaseBdev1", 00:10:23.292 "uuid": "89ff12fd-65e6-4ae4-8765-9f1cc0fca91e", 00:10:23.292 "is_configured": true, 00:10:23.292 "data_offset": 2048, 00:10:23.292 "data_size": 63488 00:10:23.292 }, 00:10:23.292 { 00:10:23.292 "name": "BaseBdev2", 00:10:23.292 "uuid": "04998fb4-41f5-49ea-ac04-be5f70fa22bd", 00:10:23.292 "is_configured": true, 00:10:23.292 "data_offset": 2048, 00:10:23.292 "data_size": 63488 00:10:23.292 }, 
00:10:23.292 { 00:10:23.292 "name": "BaseBdev3", 00:10:23.292 "uuid": "2dc6d76f-e96f-4e15-973a-bff06d423c15", 00:10:23.292 "is_configured": true, 00:10:23.292 "data_offset": 2048, 00:10:23.292 "data_size": 63488 00:10:23.292 } 00:10:23.292 ] 00:10:23.292 } 00:10:23.292 } 00:10:23.292 }' 00:10:23.292 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.292 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:23.292 BaseBdev2 00:10:23.292 BaseBdev3' 00:10:23.292 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.551 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.552 
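The `@189`/`@192` steps above build comparison strings from `block_size` and the metadata layout fields via `jq`, then match them with `[[ 512 == \5\1\2\ \ \ ]]`. A minimal standalone sketch of that jq idiom, with the `bdev_get_bdevs`-style JSON inlined so it runs without an SPDK target (the sample bdev is hypothetical):

```shell
#!/usr/bin/env bash
# Inlined stand-in for `rpc_cmd bdev_get_bdevs -b BaseBdev1` output; the real
# test queries a live SPDK app. md_size/md_interleave/dif_type are absent, so
# jq resolves them to null.
bdev_json='[{"name":"BaseBdev1","block_size":512,"num_blocks":65536}]'

# Same filter as bdev_raid.sh@192: collect block size plus metadata layout
# into one space-joined string. jq's join() renders null elements as empty
# strings, so the result is "512" followed by three spaces - which is why the
# test's pattern match escapes three trailing spaces.
cmp_base_bdev=$(echo "$bdev_json" \
    | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# Mirror of the test's comparison: literal match including trailing spaces.
if [[ $cmp_base_bdev == "512   " ]]; then
    echo "base bdev layout matches raid volume"
fi
```

The trailing-space encoding is fragile but cheap: a base bdev with any metadata configured would produce a different joined string and fail the `[[ ... ]]` match.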
16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.552 [2024-11-05 16:24:36.519455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.552 [2024-11-05 16:24:36.519488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.552 [2024-11-05 16:24:36.519561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.552 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.811 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.811 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.811 "name": "Existed_Raid", 00:10:23.811 "uuid": "f0da10db-031b-4294-9ced-0c6aa81bce37", 00:10:23.811 "strip_size_kb": 64, 00:10:23.811 "state": "offline", 00:10:23.811 "raid_level": "raid0", 00:10:23.811 "superblock": true, 00:10:23.811 "num_base_bdevs": 3, 00:10:23.811 "num_base_bdevs_discovered": 2, 00:10:23.811 "num_base_bdevs_operational": 2, 00:10:23.811 "base_bdevs_list": [ 00:10:23.811 { 00:10:23.811 "name": null, 00:10:23.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.811 "is_configured": false, 00:10:23.811 "data_offset": 0, 00:10:23.811 "data_size": 63488 00:10:23.811 }, 00:10:23.811 { 00:10:23.811 "name": "BaseBdev2", 00:10:23.811 "uuid": "04998fb4-41f5-49ea-ac04-be5f70fa22bd", 00:10:23.811 "is_configured": true, 00:10:23.811 "data_offset": 2048, 00:10:23.811 "data_size": 63488 00:10:23.811 }, 00:10:23.811 { 00:10:23.811 "name": "BaseBdev3", 00:10:23.811 "uuid": "2dc6d76f-e96f-4e15-973a-bff06d423c15", 
00:10:23.811 "is_configured": true, 00:10:23.811 "data_offset": 2048, 00:10:23.811 "data_size": 63488 00:10:23.811 } 00:10:23.811 ] 00:10:23.811 }' 00:10:23.811 16:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.811 16:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.070 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.071 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:24.071 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.071 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.071 [2024-11-05 16:24:37.141464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.330 [2024-11-05 16:24:37.301019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.330 [2024-11-05 16:24:37.301079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.330 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 BaseBdev2 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 [ 00:10:24.591 { 00:10:24.591 "name": "BaseBdev2", 00:10:24.591 "aliases": [ 00:10:24.591 "abf297f3-f6c7-4e98-9343-9b020421592f" 00:10:24.591 ], 00:10:24.591 "product_name": "Malloc disk", 00:10:24.591 "block_size": 512, 00:10:24.591 "num_blocks": 65536, 00:10:24.591 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:24.591 "assigned_rate_limits": { 00:10:24.591 "rw_ios_per_sec": 0, 00:10:24.591 "rw_mbytes_per_sec": 0, 00:10:24.591 "r_mbytes_per_sec": 0, 00:10:24.591 "w_mbytes_per_sec": 0 00:10:24.591 }, 00:10:24.591 "claimed": false, 00:10:24.591 "zoned": false, 00:10:24.591 "supported_io_types": { 00:10:24.591 "read": true, 00:10:24.591 "write": true, 00:10:24.591 "unmap": true, 00:10:24.591 "flush": true, 00:10:24.591 "reset": true, 00:10:24.591 "nvme_admin": false, 00:10:24.591 "nvme_io": false, 00:10:24.591 "nvme_io_md": false, 00:10:24.591 "write_zeroes": true, 00:10:24.591 "zcopy": true, 00:10:24.591 "get_zone_info": false, 00:10:24.591 "zone_management": false, 00:10:24.591 
"zone_append": false, 00:10:24.591 "compare": false, 00:10:24.591 "compare_and_write": false, 00:10:24.591 "abort": true, 00:10:24.591 "seek_hole": false, 00:10:24.591 "seek_data": false, 00:10:24.591 "copy": true, 00:10:24.591 "nvme_iov_md": false 00:10:24.591 }, 00:10:24.591 "memory_domains": [ 00:10:24.591 { 00:10:24.591 "dma_device_id": "system", 00:10:24.591 "dma_device_type": 1 00:10:24.591 }, 00:10:24.591 { 00:10:24.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.591 "dma_device_type": 2 00:10:24.591 } 00:10:24.591 ], 00:10:24.591 "driver_specific": {} 00:10:24.591 } 00:10:24.591 ] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 BaseBdev3 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:24.591 
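The `waitforbdev` calls in this stretch (`@901`–`@909` in autotest_common.sh) gate each step on a bdev actually existing: they default `bdev_timeout` to 2000 ms, run `bdev_wait_for_examine`, then query `bdev_get_bdevs -b <name> -t <timeout>`. The general poll-until-ready idiom behind that helper can be sketched as below; this is a simplified stand-in, not the real implementation, and `check_bdev`/`marker` are hypothetical substitutes for the RPC query:

```shell
#!/usr/bin/env bash
# Generic poll-with-retry loop: run a check command up to $tries times,
# sleeping briefly between attempts, and report success/failure. In the real
# helper the retry/timeout is largely delegated to `rpc_cmd bdev_get_bdevs
# -b NAME -t TIMEOUT`, which blocks inside the SPDK app instead.
waitfor() {
    local tries=$1; shift
    local i
    for ((i = 0; i < tries; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# Hypothetical check: a marker file stands in for the bdev appearing in the
# RPC listing.
marker=$(mktemp)
check_bdev() { [ -e "$marker" ]; }

if waitfor 5 check_bdev; then
    result=found
else
    result=missing
fi
rm -f "$marker"
```

Usage in the log follows the same shape: create the malloc bdev, then refuse to proceed (`return 0` only on success) until `bdev_get_bdevs` can describe it.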
16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 [ 00:10:24.591 { 00:10:24.591 "name": "BaseBdev3", 00:10:24.591 "aliases": [ 00:10:24.591 "f9960851-c4fe-4d44-b7f7-ae150370790c" 00:10:24.591 ], 00:10:24.591 "product_name": "Malloc disk", 00:10:24.591 "block_size": 512, 00:10:24.591 "num_blocks": 65536, 00:10:24.591 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:24.591 "assigned_rate_limits": { 00:10:24.591 "rw_ios_per_sec": 0, 00:10:24.591 "rw_mbytes_per_sec": 0, 00:10:24.591 "r_mbytes_per_sec": 0, 00:10:24.591 "w_mbytes_per_sec": 0 00:10:24.591 }, 00:10:24.591 "claimed": false, 00:10:24.591 "zoned": false, 00:10:24.591 "supported_io_types": { 00:10:24.591 "read": true, 00:10:24.591 "write": true, 00:10:24.591 "unmap": true, 00:10:24.591 "flush": true, 00:10:24.591 "reset": true, 00:10:24.591 "nvme_admin": false, 00:10:24.591 "nvme_io": false, 00:10:24.591 "nvme_io_md": false, 00:10:24.591 "write_zeroes": true, 00:10:24.591 "zcopy": true, 00:10:24.591 "get_zone_info": false, 
00:10:24.591 "zone_management": false, 00:10:24.591 "zone_append": false, 00:10:24.591 "compare": false, 00:10:24.591 "compare_and_write": false, 00:10:24.591 "abort": true, 00:10:24.591 "seek_hole": false, 00:10:24.591 "seek_data": false, 00:10:24.591 "copy": true, 00:10:24.591 "nvme_iov_md": false 00:10:24.591 }, 00:10:24.591 "memory_domains": [ 00:10:24.591 { 00:10:24.591 "dma_device_id": "system", 00:10:24.591 "dma_device_type": 1 00:10:24.591 }, 00:10:24.591 { 00:10:24.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.591 "dma_device_type": 2 00:10:24.591 } 00:10:24.591 ], 00:10:24.591 "driver_specific": {} 00:10:24.591 } 00:10:24.591 ] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.591 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.591 [2024-11-05 16:24:37.628992] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.592 [2024-11-05 16:24:37.629092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.592 [2024-11-05 16:24:37.629149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.592 [2024-11-05 16:24:37.631325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:24.592 "name": "Existed_Raid", 00:10:24.592 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:24.592 "strip_size_kb": 64, 00:10:24.592 "state": "configuring", 00:10:24.592 "raid_level": "raid0", 00:10:24.592 "superblock": true, 00:10:24.592 "num_base_bdevs": 3, 00:10:24.592 "num_base_bdevs_discovered": 2, 00:10:24.592 "num_base_bdevs_operational": 3, 00:10:24.592 "base_bdevs_list": [ 00:10:24.592 { 00:10:24.592 "name": "BaseBdev1", 00:10:24.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.592 "is_configured": false, 00:10:24.592 "data_offset": 0, 00:10:24.592 "data_size": 0 00:10:24.592 }, 00:10:24.592 { 00:10:24.592 "name": "BaseBdev2", 00:10:24.592 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:24.592 "is_configured": true, 00:10:24.592 "data_offset": 2048, 00:10:24.592 "data_size": 63488 00:10:24.592 }, 00:10:24.592 { 00:10:24.592 "name": "BaseBdev3", 00:10:24.592 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:24.592 "is_configured": true, 00:10:24.592 "data_offset": 2048, 00:10:24.592 "data_size": 63488 00:10:24.592 } 00:10:24.592 ] 00:10:24.592 }' 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.592 16:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.161 [2024-11-05 16:24:38.096452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.161 "name": "Existed_Raid", 00:10:25.161 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:25.161 "strip_size_kb": 64, 00:10:25.161 "state": "configuring", 00:10:25.161 "raid_level": "raid0", 
00:10:25.161 "superblock": true, 00:10:25.161 "num_base_bdevs": 3, 00:10:25.161 "num_base_bdevs_discovered": 1, 00:10:25.161 "num_base_bdevs_operational": 3, 00:10:25.161 "base_bdevs_list": [ 00:10:25.161 { 00:10:25.161 "name": "BaseBdev1", 00:10:25.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.161 "is_configured": false, 00:10:25.161 "data_offset": 0, 00:10:25.161 "data_size": 0 00:10:25.161 }, 00:10:25.161 { 00:10:25.161 "name": null, 00:10:25.161 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:25.161 "is_configured": false, 00:10:25.161 "data_offset": 0, 00:10:25.161 "data_size": 63488 00:10:25.161 }, 00:10:25.161 { 00:10:25.161 "name": "BaseBdev3", 00:10:25.161 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:25.161 "is_configured": true, 00:10:25.161 "data_offset": 2048, 00:10:25.161 "data_size": 63488 00:10:25.161 } 00:10:25.161 ] 00:10:25.161 }' 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.161 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.729 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-11-05 16:24:38.668247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.730 BaseBdev1 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [ 00:10:25.730 { 00:10:25.730 "name": "BaseBdev1", 00:10:25.730 
"aliases": [ 00:10:25.730 "7d675242-c8b2-4320-a303-e18509625ff1" 00:10:25.730 ], 00:10:25.730 "product_name": "Malloc disk", 00:10:25.730 "block_size": 512, 00:10:25.730 "num_blocks": 65536, 00:10:25.730 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:25.730 "assigned_rate_limits": { 00:10:25.730 "rw_ios_per_sec": 0, 00:10:25.730 "rw_mbytes_per_sec": 0, 00:10:25.730 "r_mbytes_per_sec": 0, 00:10:25.730 "w_mbytes_per_sec": 0 00:10:25.730 }, 00:10:25.730 "claimed": true, 00:10:25.730 "claim_type": "exclusive_write", 00:10:25.730 "zoned": false, 00:10:25.730 "supported_io_types": { 00:10:25.730 "read": true, 00:10:25.730 "write": true, 00:10:25.730 "unmap": true, 00:10:25.730 "flush": true, 00:10:25.730 "reset": true, 00:10:25.730 "nvme_admin": false, 00:10:25.730 "nvme_io": false, 00:10:25.730 "nvme_io_md": false, 00:10:25.730 "write_zeroes": true, 00:10:25.730 "zcopy": true, 00:10:25.730 "get_zone_info": false, 00:10:25.730 "zone_management": false, 00:10:25.730 "zone_append": false, 00:10:25.730 "compare": false, 00:10:25.730 "compare_and_write": false, 00:10:25.730 "abort": true, 00:10:25.730 "seek_hole": false, 00:10:25.730 "seek_data": false, 00:10:25.730 "copy": true, 00:10:25.730 "nvme_iov_md": false 00:10:25.730 }, 00:10:25.730 "memory_domains": [ 00:10:25.730 { 00:10:25.730 "dma_device_id": "system", 00:10:25.730 "dma_device_type": 1 00:10:25.730 }, 00:10:25.730 { 00:10:25.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.730 "dma_device_type": 2 00:10:25.730 } 00:10:25.730 ], 00:10:25.730 "driver_specific": {} 00:10:25.730 } 00:10:25.730 ] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:25.730 16:24:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.730 "name": "Existed_Raid", 00:10:25.730 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:25.730 "strip_size_kb": 64, 00:10:25.730 "state": "configuring", 00:10:25.730 "raid_level": "raid0", 00:10:25.730 "superblock": true, 00:10:25.730 "num_base_bdevs": 3, 00:10:25.730 
"num_base_bdevs_discovered": 2, 00:10:25.730 "num_base_bdevs_operational": 3, 00:10:25.730 "base_bdevs_list": [ 00:10:25.730 { 00:10:25.730 "name": "BaseBdev1", 00:10:25.730 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:25.730 "is_configured": true, 00:10:25.730 "data_offset": 2048, 00:10:25.730 "data_size": 63488 00:10:25.730 }, 00:10:25.730 { 00:10:25.730 "name": null, 00:10:25.730 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:25.730 "is_configured": false, 00:10:25.730 "data_offset": 0, 00:10:25.730 "data_size": 63488 00:10:25.730 }, 00:10:25.730 { 00:10:25.730 "name": "BaseBdev3", 00:10:25.730 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:25.730 "is_configured": true, 00:10:25.730 "data_offset": 2048, 00:10:25.730 "data_size": 63488 00:10:25.730 } 00:10:25.730 ] 00:10:25.730 }' 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.730 16:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.308 16:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.308 [2024-11-05 16:24:39.227390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.308 16:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.308 "name": "Existed_Raid", 00:10:26.308 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:26.308 "strip_size_kb": 64, 00:10:26.308 "state": "configuring", 00:10:26.308 "raid_level": "raid0", 00:10:26.308 "superblock": true, 00:10:26.308 "num_base_bdevs": 3, 00:10:26.308 "num_base_bdevs_discovered": 1, 00:10:26.308 "num_base_bdevs_operational": 3, 00:10:26.308 "base_bdevs_list": [ 00:10:26.308 { 00:10:26.308 "name": "BaseBdev1", 00:10:26.308 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:26.308 "is_configured": true, 00:10:26.308 "data_offset": 2048, 00:10:26.308 "data_size": 63488 00:10:26.308 }, 00:10:26.308 { 00:10:26.308 "name": null, 00:10:26.308 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:26.308 "is_configured": false, 00:10:26.308 "data_offset": 0, 00:10:26.308 "data_size": 63488 00:10:26.308 }, 00:10:26.308 { 00:10:26.308 "name": null, 00:10:26.308 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:26.308 "is_configured": false, 00:10:26.308 "data_offset": 0, 00:10:26.308 "data_size": 63488 00:10:26.308 } 00:10:26.308 ] 00:10:26.308 }' 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.308 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 16:24:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 [2024-11-05 16:24:39.774512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.876 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.876 "name": "Existed_Raid", 00:10:26.876 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:26.876 "strip_size_kb": 64, 00:10:26.876 "state": "configuring", 00:10:26.876 "raid_level": "raid0", 00:10:26.876 "superblock": true, 00:10:26.876 "num_base_bdevs": 3, 00:10:26.876 "num_base_bdevs_discovered": 2, 00:10:26.877 "num_base_bdevs_operational": 3, 00:10:26.877 "base_bdevs_list": [ 00:10:26.877 { 00:10:26.877 "name": "BaseBdev1", 00:10:26.877 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:26.877 "is_configured": true, 00:10:26.877 "data_offset": 2048, 00:10:26.877 "data_size": 63488 00:10:26.877 }, 00:10:26.877 { 00:10:26.877 "name": null, 00:10:26.877 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:26.877 "is_configured": false, 00:10:26.877 "data_offset": 0, 00:10:26.877 "data_size": 63488 00:10:26.877 }, 00:10:26.877 { 00:10:26.877 "name": "BaseBdev3", 00:10:26.877 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:26.877 "is_configured": true, 00:10:26.877 "data_offset": 2048, 00:10:26.877 "data_size": 63488 00:10:26.877 } 00:10:26.877 ] 00:10:26.877 }' 00:10:26.877 16:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.877 16:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.445 [2024-11-05 16:24:40.289739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.445 "name": "Existed_Raid", 00:10:27.445 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:27.445 "strip_size_kb": 64, 00:10:27.445 "state": "configuring", 00:10:27.445 "raid_level": "raid0", 00:10:27.445 "superblock": true, 00:10:27.445 "num_base_bdevs": 3, 00:10:27.445 "num_base_bdevs_discovered": 1, 00:10:27.445 "num_base_bdevs_operational": 3, 00:10:27.445 "base_bdevs_list": [ 00:10:27.445 { 00:10:27.445 "name": null, 00:10:27.445 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:27.445 "is_configured": false, 00:10:27.445 "data_offset": 0, 00:10:27.445 "data_size": 63488 00:10:27.445 }, 00:10:27.445 { 00:10:27.445 "name": null, 00:10:27.445 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:27.445 "is_configured": false, 00:10:27.445 "data_offset": 0, 00:10:27.445 "data_size": 63488 00:10:27.445 
}, 00:10:27.445 { 00:10:27.445 "name": "BaseBdev3", 00:10:27.445 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:27.445 "is_configured": true, 00:10:27.445 "data_offset": 2048, 00:10:27.445 "data_size": 63488 00:10:27.445 } 00:10:27.445 ] 00:10:27.445 }' 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.445 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.014 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.015 [2024-11-05 16:24:40.886054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.015 "name": "Existed_Raid", 00:10:28.015 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:28.015 "strip_size_kb": 64, 00:10:28.015 "state": "configuring", 00:10:28.015 "raid_level": "raid0", 00:10:28.015 "superblock": true, 00:10:28.015 "num_base_bdevs": 3, 00:10:28.015 "num_base_bdevs_discovered": 2, 00:10:28.015 
"num_base_bdevs_operational": 3, 00:10:28.015 "base_bdevs_list": [ 00:10:28.015 { 00:10:28.015 "name": null, 00:10:28.015 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:28.015 "is_configured": false, 00:10:28.015 "data_offset": 0, 00:10:28.015 "data_size": 63488 00:10:28.015 }, 00:10:28.015 { 00:10:28.015 "name": "BaseBdev2", 00:10:28.015 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:28.015 "is_configured": true, 00:10:28.015 "data_offset": 2048, 00:10:28.015 "data_size": 63488 00:10:28.015 }, 00:10:28.015 { 00:10:28.015 "name": "BaseBdev3", 00:10:28.015 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:28.015 "is_configured": true, 00:10:28.015 "data_offset": 2048, 00:10:28.015 "data_size": 63488 00:10:28.015 } 00:10:28.015 ] 00:10:28.015 }' 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.015 16:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d675242-c8b2-4320-a303-e18509625ff1 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 [2024-11-05 16:24:41.528207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:28.584 [2024-11-05 16:24:41.528592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.584 [2024-11-05 16:24:41.528623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:28.584 [2024-11-05 16:24:41.528910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:28.584 [2024-11-05 16:24:41.529075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.584 [2024-11-05 16:24:41.529087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:28.584 NewBaseBdev 00:10:28.584 [2024-11-05 16:24:41.529232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:28.584 16:24:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 [ 00:10:28.584 { 00:10:28.584 "name": "NewBaseBdev", 00:10:28.584 "aliases": [ 00:10:28.584 "7d675242-c8b2-4320-a303-e18509625ff1" 00:10:28.584 ], 00:10:28.584 "product_name": "Malloc disk", 00:10:28.584 "block_size": 512, 00:10:28.584 "num_blocks": 65536, 00:10:28.584 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:28.584 "assigned_rate_limits": { 00:10:28.584 "rw_ios_per_sec": 0, 00:10:28.584 "rw_mbytes_per_sec": 0, 00:10:28.584 "r_mbytes_per_sec": 0, 00:10:28.584 "w_mbytes_per_sec": 0 00:10:28.584 }, 00:10:28.584 "claimed": true, 00:10:28.584 "claim_type": "exclusive_write", 00:10:28.584 "zoned": false, 00:10:28.584 "supported_io_types": { 00:10:28.584 "read": true, 00:10:28.584 "write": true, 00:10:28.584 "unmap": true, 
00:10:28.584 "flush": true, 00:10:28.584 "reset": true, 00:10:28.584 "nvme_admin": false, 00:10:28.584 "nvme_io": false, 00:10:28.584 "nvme_io_md": false, 00:10:28.584 "write_zeroes": true, 00:10:28.584 "zcopy": true, 00:10:28.584 "get_zone_info": false, 00:10:28.584 "zone_management": false, 00:10:28.584 "zone_append": false, 00:10:28.584 "compare": false, 00:10:28.584 "compare_and_write": false, 00:10:28.584 "abort": true, 00:10:28.584 "seek_hole": false, 00:10:28.584 "seek_data": false, 00:10:28.584 "copy": true, 00:10:28.584 "nvme_iov_md": false 00:10:28.584 }, 00:10:28.584 "memory_domains": [ 00:10:28.584 { 00:10:28.584 "dma_device_id": "system", 00:10:28.584 "dma_device_type": 1 00:10:28.584 }, 00:10:28.584 { 00:10:28.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.584 "dma_device_type": 2 00:10:28.584 } 00:10:28.584 ], 00:10:28.584 "driver_specific": {} 00:10:28.584 } 00:10:28.584 ] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.584 16:24:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.584 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.584 "name": "Existed_Raid", 00:10:28.584 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37", 00:10:28.584 "strip_size_kb": 64, 00:10:28.584 "state": "online", 00:10:28.584 "raid_level": "raid0", 00:10:28.584 "superblock": true, 00:10:28.584 "num_base_bdevs": 3, 00:10:28.584 "num_base_bdevs_discovered": 3, 00:10:28.584 "num_base_bdevs_operational": 3, 00:10:28.584 "base_bdevs_list": [ 00:10:28.584 { 00:10:28.584 "name": "NewBaseBdev", 00:10:28.584 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1", 00:10:28.584 "is_configured": true, 00:10:28.584 "data_offset": 2048, 00:10:28.584 "data_size": 63488 00:10:28.584 }, 00:10:28.584 { 00:10:28.584 "name": "BaseBdev2", 00:10:28.585 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f", 00:10:28.585 "is_configured": true, 00:10:28.585 "data_offset": 2048, 00:10:28.585 "data_size": 63488 00:10:28.585 }, 00:10:28.585 { 00:10:28.585 "name": "BaseBdev3", 00:10:28.585 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c", 00:10:28.585 "is_configured": 
true, 00:10:28.585 "data_offset": 2048, 00:10:28.585 "data_size": 63488 00:10:28.585 } 00:10:28.585 ] 00:10:28.585 }' 00:10:28.585 16:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.585 16:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.153 [2024-11-05 16:24:42.019816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.153 "name": "Existed_Raid", 00:10:29.153 "aliases": [ 00:10:29.153 "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37" 00:10:29.153 ], 00:10:29.153 "product_name": "Raid Volume", 
00:10:29.153 "block_size": 512,
00:10:29.153 "num_blocks": 190464,
00:10:29.153 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37",
00:10:29.153 "assigned_rate_limits": {
00:10:29.153 "rw_ios_per_sec": 0,
00:10:29.153 "rw_mbytes_per_sec": 0,
00:10:29.153 "r_mbytes_per_sec": 0,
00:10:29.153 "w_mbytes_per_sec": 0
00:10:29.153 },
00:10:29.153 "claimed": false,
00:10:29.153 "zoned": false,
00:10:29.153 "supported_io_types": {
00:10:29.153 "read": true,
00:10:29.153 "write": true,
00:10:29.153 "unmap": true,
00:10:29.153 "flush": true,
00:10:29.153 "reset": true,
00:10:29.153 "nvme_admin": false,
00:10:29.153 "nvme_io": false,
00:10:29.153 "nvme_io_md": false,
00:10:29.153 "write_zeroes": true,
00:10:29.153 "zcopy": false,
00:10:29.153 "get_zone_info": false,
00:10:29.153 "zone_management": false,
00:10:29.153 "zone_append": false,
00:10:29.153 "compare": false,
00:10:29.153 "compare_and_write": false,
00:10:29.153 "abort": false,
00:10:29.153 "seek_hole": false,
00:10:29.153 "seek_data": false,
00:10:29.153 "copy": false,
00:10:29.153 "nvme_iov_md": false
00:10:29.153 },
00:10:29.153 "memory_domains": [
00:10:29.153 {
00:10:29.153 "dma_device_id": "system",
00:10:29.153 "dma_device_type": 1
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.153 "dma_device_type": 2
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "dma_device_id": "system",
00:10:29.153 "dma_device_type": 1
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.153 "dma_device_type": 2
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "dma_device_id": "system",
00:10:29.153 "dma_device_type": 1
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.153 "dma_device_type": 2
00:10:29.153 }
00:10:29.153 ],
00:10:29.153 "driver_specific": {
00:10:29.153 "raid": {
00:10:29.153 "uuid": "6748268f-f3b4-4c9e-9ac3-b6d0fb887e37",
00:10:29.153 "strip_size_kb": 64,
00:10:29.153 "state": "online",
00:10:29.153 "raid_level": "raid0",
00:10:29.153 "superblock": true,
00:10:29.153 "num_base_bdevs": 3,
00:10:29.153 "num_base_bdevs_discovered": 3,
00:10:29.153 "num_base_bdevs_operational": 3,
00:10:29.153 "base_bdevs_list": [
00:10:29.153 {
00:10:29.153 "name": "NewBaseBdev",
00:10:29.153 "uuid": "7d675242-c8b2-4320-a303-e18509625ff1",
00:10:29.153 "is_configured": true,
00:10:29.153 "data_offset": 2048,
00:10:29.153 "data_size": 63488
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "name": "BaseBdev2",
00:10:29.153 "uuid": "abf297f3-f6c7-4e98-9343-9b020421592f",
00:10:29.153 "is_configured": true,
00:10:29.153 "data_offset": 2048,
00:10:29.153 "data_size": 63488
00:10:29.153 },
00:10:29.153 {
00:10:29.153 "name": "BaseBdev3",
00:10:29.153 "uuid": "f9960851-c4fe-4d44-b7f7-ae150370790c",
00:10:29.153 "is_configured": true,
00:10:29.153 "data_offset": 2048,
00:10:29.153 "data_size": 63488
00:10:29.153 }
00:10:29.153 ]
00:10:29.153 }
00:10:29.153 }
00:10:29.153 }'
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:29.153 BaseBdev2
00:10:29.153 BaseBdev3'
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:29.153 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:29.154 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:29.413 [2024-11-05 16:24:42.263039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:29.413 [2024-11-05 16:24:42.263070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:29.413 [2024-11-05 16:24:42.263170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:29.413 [2024-11-05 16:24:42.263232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:29.413 [2024-11-05 16:24:42.263245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64682
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64682 ']'
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64682
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:29.413 16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64682
killing process with pid 64682
16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64682'
16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64682
16:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64682
[2024-11-05 16:24:42.290224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:29.672 [2024-11-05 16:24:42.626567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:31.047 16:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:10:31.047
00:10:31.047 real	0m11.134s
00:10:31.047 user	0m17.704s
00:10:31.047 sys	0m1.841s
00:10:31.047 16:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:31.047 16:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:31.047 ************************************
00:10:31.047 END TEST raid_state_function_test_sb
00:10:31.047 ************************************
00:10:31.047 16:24:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:10:31.047 16:24:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:31.047 16:24:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:31.047 16:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:31.047 ************************************
00:10:31.047 START TEST raid_superblock_test
00:10:31.047 ************************************
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:31.047 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65309
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65309
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65309 ']'
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:31.048 16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable
16:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-05 16:24:43.960116] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:10:31.048 [2024-11-05 16:24:43.960244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65309 ]
00:10:31.306 [2024-11-05 16:24:44.131263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.566 [2024-11-05 16:24:44.250639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.566 [2024-11-05 16:24:44.465601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-05 16:24:44.465768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc1
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.825 [2024-11-05 16:24:44.875755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:31.825 [2024-11-05 16:24:44.875839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:31.825 [2024-11-05 16:24:44.875866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:31.825 [2024-11-05 16:24:44.875877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:31.825 [2024-11-05 16:24:44.878377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:31.825 [2024-11-05 16:24:44.878476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.825 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.084 malloc2
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.085 [2024-11-05 16:24:44.937052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:32.085 [2024-11-05 16:24:44.937122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:32.085 [2024-11-05 16:24:44.937149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:32.085 [2024-11-05 16:24:44.937160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:32.085 [2024-11-05 16:24:44.939647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:32.085 [2024-11-05 16:24:44.939688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.085 16:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.085 malloc3
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.085 [2024-11-05 16:24:45.011605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:32.085 [2024-11-05 16:24:45.011767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:32.085 [2024-11-05 16:24:45.011851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:32.085 [2024-11-05 16:24:45.011920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:32.085 [2024-11-05 16:24:45.015046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:32.085 [2024-11-05 16:24:45.015175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.085 [2024-11-05 16:24:45.023631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:32.085 [2024-11-05 16:24:45.025842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:32.085 [2024-11-05 16:24:45.025968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:32.085 [2024-11-05 16:24:45.026192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:32.085 [2024-11-05 16:24:45.026249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:32.085 [2024-11-05 16:24:45.026616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:32.085 [2024-11-05 16:24:45.026865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:32.085 [2024-11-05 16:24:45.026915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:10:32.085 [2024-11-05 16:24:45.027163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.085 "name": "raid_bdev1",
00:10:32.085 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8",
00:10:32.085 "strip_size_kb": 64,
00:10:32.085 "state": "online",
00:10:32.085 "raid_level": "raid0",
00:10:32.085 "superblock": true,
00:10:32.085 "num_base_bdevs": 3,
00:10:32.085 "num_base_bdevs_discovered": 3,
00:10:32.085 "num_base_bdevs_operational": 3,
00:10:32.085 "base_bdevs_list": [
00:10:32.085 {
00:10:32.085 "name": "pt1",
00:10:32.085 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:32.085 "is_configured": true,
00:10:32.085 "data_offset": 2048,
00:10:32.085 "data_size": 63488
00:10:32.085 },
00:10:32.085 {
00:10:32.085 "name": "pt2",
00:10:32.085 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:32.085 "is_configured": true,
00:10:32.085 "data_offset": 2048,
00:10:32.085 "data_size": 63488
00:10:32.085 },
00:10:32.085 {
00:10:32.085 "name": "pt3",
00:10:32.085 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:32.085 "is_configured": true,
00:10:32.085 "data_offset": 2048,
00:10:32.085 "data_size": 63488
00:10:32.085 }
00:10:32.085 ]
00:10:32.085 }'
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.085 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:32.653 [2024-11-05 16:24:45.559046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.653 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:32.653 "name": "raid_bdev1",
00:10:32.653 "aliases": [
00:10:32.653 "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8"
00:10:32.653 ],
00:10:32.653 "product_name": "Raid Volume",
00:10:32.653 "block_size": 512,
00:10:32.653 "num_blocks": 190464,
00:10:32.653 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8",
00:10:32.653 "assigned_rate_limits": {
00:10:32.653 "rw_ios_per_sec": 0,
00:10:32.653 "rw_mbytes_per_sec": 0,
00:10:32.653 "r_mbytes_per_sec": 0,
00:10:32.653 "w_mbytes_per_sec": 0
00:10:32.653 },
00:10:32.653 "claimed": false,
00:10:32.653 "zoned": false,
00:10:32.653 "supported_io_types": {
00:10:32.653 "read": true,
00:10:32.653 "write": true,
00:10:32.653 "unmap": true,
00:10:32.653 "flush": true,
00:10:32.653 "reset": true,
00:10:32.653 "nvme_admin": false,
00:10:32.653 "nvme_io": false,
00:10:32.653 "nvme_io_md": false,
00:10:32.653 "write_zeroes": true,
00:10:32.653 "zcopy": false,
00:10:32.653 "get_zone_info": false,
00:10:32.653 "zone_management": false,
00:10:32.653 "zone_append": false,
00:10:32.653 "compare": false,
00:10:32.653 "compare_and_write": false,
00:10:32.653 "abort": false,
00:10:32.653 "seek_hole": false,
00:10:32.653 "seek_data": false,
00:10:32.653 "copy": false,
00:10:32.653 "nvme_iov_md": false
00:10:32.653 },
00:10:32.653 "memory_domains": [
00:10:32.653 {
00:10:32.653 "dma_device_id": "system",
00:10:32.653 "dma_device_type": 1
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:32.653 "dma_device_type": 2
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "dma_device_id": "system",
00:10:32.653 "dma_device_type": 1
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:32.653 "dma_device_type": 2
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "dma_device_id": "system",
00:10:32.653 "dma_device_type": 1
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:32.653 "dma_device_type": 2
00:10:32.653 }
00:10:32.653 ],
00:10:32.653 "driver_specific": {
00:10:32.653 "raid": {
00:10:32.653 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8",
00:10:32.653 "strip_size_kb": 64,
00:10:32.653 "state": "online",
00:10:32.653 "raid_level": "raid0",
00:10:32.653 "superblock": true,
00:10:32.653 "num_base_bdevs": 3,
00:10:32.653 "num_base_bdevs_discovered": 3,
00:10:32.653 "num_base_bdevs_operational": 3,
00:10:32.653 "base_bdevs_list": [
00:10:32.653 {
00:10:32.653 "name": "pt1",
00:10:32.653 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:32.653 "is_configured": true,
00:10:32.653 "data_offset": 2048,
00:10:32.653 "data_size": 63488
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "name": "pt2",
00:10:32.653 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:32.653 "is_configured": true,
00:10:32.653 "data_offset": 2048,
00:10:32.653 "data_size": 63488
00:10:32.653 },
00:10:32.653 {
00:10:32.653 "name": "pt3",
00:10:32.654 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:32.654 "is_configured": true,
00:10:32.654 "data_offset": 2048,
00:10:32.654 "data_size": 63488
00:10:32.654 }
00:10:32.654 ]
00:10:32.654 }
00:10:32.654 }
00:10:32.654 }'
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:32.654 pt2
00:10:32.654 pt3'
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.654 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.913 [2024-11-05 16:24:45.866481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589
-- # [[ 0 == 0 ]] 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a5cc7dfb-1c2c-435d-be93-1cd83d8098d8 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a5cc7dfb-1c2c-435d-be93-1cd83d8098d8 ']' 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.913 [2024-11-05 16:24:45.898094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.913 [2024-11-05 16:24:45.898173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.913 [2024-11-05 16:24:45.898275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.913 [2024-11-05 16:24:45.898365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.913 [2024-11-05 16:24:45.898377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.913 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.914 16:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.173 [2024-11-05 16:24:46.053907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:33.173 [2024-11-05 16:24:46.055992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:33.173 [2024-11-05 16:24:46.056110] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:33.173 [2024-11-05 16:24:46.056172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:33.173 [2024-11-05 16:24:46.056230] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:33.173 [2024-11-05 16:24:46.056252] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:33.173 [2024-11-05 16:24:46.056271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.173 [2024-11-05 16:24:46.056284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:33.173 request: 00:10:33.173 { 00:10:33.173 "name": "raid_bdev1", 00:10:33.173 "raid_level": "raid0", 00:10:33.173 "base_bdevs": [ 00:10:33.173 "malloc1", 00:10:33.173 "malloc2", 00:10:33.173 "malloc3" 00:10:33.173 ], 00:10:33.173 "strip_size_kb": 64, 00:10:33.173 "superblock": false, 00:10:33.173 "method": "bdev_raid_create", 00:10:33.173 "req_id": 1 00:10:33.173 } 00:10:33.173 Got JSON-RPC error response 00:10:33.173 response: 00:10:33.173 { 00:10:33.173 "code": -17, 00:10:33.173 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:33.173 } 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.173 [2024-11-05 16:24:46.117765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.173 [2024-11-05 16:24:46.117894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.173 [2024-11-05 16:24:46.117939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:33.173 [2024-11-05 16:24:46.117986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.173 [2024-11-05 16:24:46.120523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.173 [2024-11-05 16:24:46.120620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.173 [2024-11-05 16:24:46.120766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:33.173 [2024-11-05 16:24:46.120871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:33.173 pt1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.173 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.174 "name": "raid_bdev1", 00:10:33.174 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8", 00:10:33.174 
"strip_size_kb": 64, 00:10:33.174 "state": "configuring", 00:10:33.174 "raid_level": "raid0", 00:10:33.174 "superblock": true, 00:10:33.174 "num_base_bdevs": 3, 00:10:33.174 "num_base_bdevs_discovered": 1, 00:10:33.174 "num_base_bdevs_operational": 3, 00:10:33.174 "base_bdevs_list": [ 00:10:33.174 { 00:10:33.174 "name": "pt1", 00:10:33.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.174 "is_configured": true, 00:10:33.174 "data_offset": 2048, 00:10:33.174 "data_size": 63488 00:10:33.174 }, 00:10:33.174 { 00:10:33.174 "name": null, 00:10:33.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.174 "is_configured": false, 00:10:33.174 "data_offset": 2048, 00:10:33.174 "data_size": 63488 00:10:33.174 }, 00:10:33.174 { 00:10:33.174 "name": null, 00:10:33.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.174 "is_configured": false, 00:10:33.174 "data_offset": 2048, 00:10:33.174 "data_size": 63488 00:10:33.174 } 00:10:33.174 ] 00:10:33.174 }' 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.174 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.743 [2024-11-05 16:24:46.573007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.743 [2024-11-05 16:24:46.573089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.743 [2024-11-05 16:24:46.573114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:33.743 [2024-11-05 16:24:46.573124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.743 [2024-11-05 16:24:46.573656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.743 [2024-11-05 16:24:46.573684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.743 [2024-11-05 16:24:46.573790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.743 [2024-11-05 16:24:46.573815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.743 pt2 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.743 [2024-11-05 16:24:46.585003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.743 16:24:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.743 "name": "raid_bdev1", 00:10:33.743 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8", 00:10:33.743 "strip_size_kb": 64, 00:10:33.743 "state": "configuring", 00:10:33.743 "raid_level": "raid0", 00:10:33.743 "superblock": true, 00:10:33.743 "num_base_bdevs": 3, 00:10:33.743 "num_base_bdevs_discovered": 1, 00:10:33.743 "num_base_bdevs_operational": 3, 00:10:33.743 "base_bdevs_list": [ 00:10:33.743 { 00:10:33.743 "name": "pt1", 00:10:33.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.743 "is_configured": true, 00:10:33.743 "data_offset": 2048, 00:10:33.743 "data_size": 63488 00:10:33.743 }, 00:10:33.743 { 00:10:33.743 "name": null, 00:10:33.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.743 "is_configured": false, 00:10:33.743 "data_offset": 0, 00:10:33.743 "data_size": 63488 00:10:33.743 }, 00:10:33.743 { 00:10:33.743 "name": null, 00:10:33.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.743 
"is_configured": false, 00:10:33.743 "data_offset": 2048, 00:10:33.743 "data_size": 63488 00:10:33.743 } 00:10:33.743 ] 00:10:33.743 }' 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.743 16:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.003 [2024-11-05 16:24:47.060245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.003 [2024-11-05 16:24:47.060393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.003 [2024-11-05 16:24:47.060471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:34.003 [2024-11-05 16:24:47.060596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.003 [2024-11-05 16:24:47.061172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.003 [2024-11-05 16:24:47.061249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.003 [2024-11-05 16:24:47.061379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:34.003 [2024-11-05 16:24:47.061440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.003 pt2 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.003 [2024-11-05 16:24:47.072200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:34.003 [2024-11-05 16:24:47.072309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.003 [2024-11-05 16:24:47.072353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.003 [2024-11-05 16:24:47.072388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.003 [2024-11-05 16:24:47.072944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.003 [2024-11-05 16:24:47.073022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:34.003 [2024-11-05 16:24:47.073113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:34.003 [2024-11-05 16:24:47.073141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:34.003 [2024-11-05 16:24:47.073286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:34.003 [2024-11-05 16:24:47.073300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:34.003 [2024-11-05 16:24:47.073601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:34.003 [2024-11-05 16:24:47.073775] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.003 [2024-11-05 16:24:47.073785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:34.003 [2024-11-05 16:24:47.073935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.003 pt3 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.003 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.263 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.263 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.263 "name": "raid_bdev1", 00:10:34.263 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8", 00:10:34.263 "strip_size_kb": 64, 00:10:34.263 "state": "online", 00:10:34.263 "raid_level": "raid0", 00:10:34.263 "superblock": true, 00:10:34.263 "num_base_bdevs": 3, 00:10:34.263 "num_base_bdevs_discovered": 3, 00:10:34.263 "num_base_bdevs_operational": 3, 00:10:34.263 "base_bdevs_list": [ 00:10:34.263 { 00:10:34.263 "name": "pt1", 00:10:34.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.263 "is_configured": true, 00:10:34.263 "data_offset": 2048, 00:10:34.263 "data_size": 63488 00:10:34.263 }, 00:10:34.263 { 00:10:34.263 "name": "pt2", 00:10:34.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.263 "is_configured": true, 00:10:34.263 "data_offset": 2048, 00:10:34.263 "data_size": 63488 00:10:34.263 }, 00:10:34.263 { 00:10:34.263 "name": "pt3", 00:10:34.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.263 "is_configured": true, 00:10:34.263 "data_offset": 2048, 00:10:34.263 "data_size": 63488 00:10:34.263 } 00:10:34.263 ] 00:10:34.263 }' 00:10:34.263 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.263 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.522 16:24:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.522 [2024-11-05 16:24:47.583749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.522 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.781 "name": "raid_bdev1", 00:10:34.781 "aliases": [ 00:10:34.781 "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8" 00:10:34.781 ], 00:10:34.781 "product_name": "Raid Volume", 00:10:34.781 "block_size": 512, 00:10:34.781 "num_blocks": 190464, 00:10:34.781 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8", 00:10:34.781 "assigned_rate_limits": { 00:10:34.781 "rw_ios_per_sec": 0, 00:10:34.781 "rw_mbytes_per_sec": 0, 00:10:34.781 "r_mbytes_per_sec": 0, 00:10:34.781 "w_mbytes_per_sec": 0 00:10:34.781 }, 00:10:34.781 "claimed": false, 00:10:34.781 "zoned": false, 00:10:34.781 "supported_io_types": { 00:10:34.781 "read": true, 00:10:34.781 "write": true, 00:10:34.781 "unmap": true, 00:10:34.781 "flush": true, 00:10:34.781 "reset": true, 00:10:34.781 "nvme_admin": false, 00:10:34.781 "nvme_io": false, 00:10:34.781 "nvme_io_md": false, 00:10:34.781 
"write_zeroes": true, 00:10:34.781 "zcopy": false, 00:10:34.781 "get_zone_info": false, 00:10:34.781 "zone_management": false, 00:10:34.781 "zone_append": false, 00:10:34.781 "compare": false, 00:10:34.781 "compare_and_write": false, 00:10:34.781 "abort": false, 00:10:34.781 "seek_hole": false, 00:10:34.781 "seek_data": false, 00:10:34.781 "copy": false, 00:10:34.781 "nvme_iov_md": false 00:10:34.781 }, 00:10:34.781 "memory_domains": [ 00:10:34.781 { 00:10:34.781 "dma_device_id": "system", 00:10:34.781 "dma_device_type": 1 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.781 "dma_device_type": 2 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "system", 00:10:34.781 "dma_device_type": 1 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.781 "dma_device_type": 2 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "system", 00:10:34.781 "dma_device_type": 1 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.781 "dma_device_type": 2 00:10:34.781 } 00:10:34.781 ], 00:10:34.781 "driver_specific": { 00:10:34.781 "raid": { 00:10:34.781 "uuid": "a5cc7dfb-1c2c-435d-be93-1cd83d8098d8", 00:10:34.781 "strip_size_kb": 64, 00:10:34.781 "state": "online", 00:10:34.781 "raid_level": "raid0", 00:10:34.781 "superblock": true, 00:10:34.781 "num_base_bdevs": 3, 00:10:34.781 "num_base_bdevs_discovered": 3, 00:10:34.781 "num_base_bdevs_operational": 3, 00:10:34.781 "base_bdevs_list": [ 00:10:34.781 { 00:10:34.781 "name": "pt1", 00:10:34.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.781 "is_configured": true, 00:10:34.781 "data_offset": 2048, 00:10:34.781 "data_size": 63488 00:10:34.781 }, 00:10:34.781 { 00:10:34.781 "name": "pt2", 00:10:34.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.781 "is_configured": true, 00:10:34.781 "data_offset": 2048, 00:10:34.781 "data_size": 63488 00:10:34.781 }, 00:10:34.781 
{ 00:10:34.781 "name": "pt3", 00:10:34.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.781 "is_configured": true, 00:10:34.781 "data_offset": 2048, 00:10:34.781 "data_size": 63488 00:10:34.781 } 00:10:34.781 ] 00:10:34.781 } 00:10:34.781 } 00:10:34.781 }' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.781 pt2 00:10:34.781 pt3' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.781 16:24:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.781 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.782 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.782 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.782 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.042 
[2024-11-05 16:24:47.887231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a5cc7dfb-1c2c-435d-be93-1cd83d8098d8 '!=' a5cc7dfb-1c2c-435d-be93-1cd83d8098d8 ']' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65309 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65309 ']' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65309 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65309 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65309' 00:10:35.042 killing process with pid 65309 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65309 00:10:35.042 [2024-11-05 16:24:47.956462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.042 [2024-11-05 16:24:47.956664] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.042 16:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65309 00:10:35.042 [2024-11-05 16:24:47.956806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.042 [2024-11-05 16:24:47.956865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:35.302 [2024-11-05 16:24:48.296766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.727 16:24:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:36.728 00:10:36.728 real 0m5.619s 00:10:36.728 user 0m8.122s 00:10:36.728 sys 0m0.876s 00:10:36.728 16:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.728 16:24:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.728 ************************************ 00:10:36.728 END TEST raid_superblock_test 00:10:36.728 ************************************ 00:10:36.728 16:24:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:36.728 16:24:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:36.728 16:24:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.728 16:24:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.728 ************************************ 00:10:36.728 START TEST raid_read_error_test 00:10:36.728 ************************************ 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:36.728 16:24:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fmstaab8cK 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65572 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65572 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65572 ']' 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.728 16:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.728 [2024-11-05 16:24:49.665237] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:10:36.728 [2024-11-05 16:24:49.665378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65572 ] 00:10:36.987 [2024-11-05 16:24:49.840767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.987 [2024-11-05 16:24:49.959692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.247 [2024-11-05 16:24:50.179795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.247 [2024-11-05 16:24:50.179869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.506 BaseBdev1_malloc 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.506 true 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.506 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.506 [2024-11-05 16:24:50.594696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:37.506 [2024-11-05 16:24:50.594754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.506 [2024-11-05 16:24:50.594792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:37.506 [2024-11-05 16:24:50.594804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.766 [2024-11-05 16:24:50.597259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.766 [2024-11-05 16:24:50.597304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:37.766 BaseBdev1 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.766 BaseBdev2_malloc 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.766 true 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.766 [2024-11-05 16:24:50.664173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:37.766 [2024-11-05 16:24:50.664234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.766 [2024-11-05 16:24:50.664255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:37.766 [2024-11-05 16:24:50.664267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.766 [2024-11-05 16:24:50.666745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.766 [2024-11-05 16:24:50.666778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:37.766 BaseBdev2 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.766 BaseBdev3_malloc 00:10:37.766 16:24:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.766 true 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.766 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.767 [2024-11-05 16:24:50.749982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:37.767 [2024-11-05 16:24:50.750043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.767 [2024-11-05 16:24:50.750065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:37.767 [2024-11-05 16:24:50.750077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.767 [2024-11-05 16:24:50.752430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.767 [2024-11-05 16:24:50.752493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:37.767 BaseBdev3 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.767 [2024-11-05 16:24:50.762044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.767 [2024-11-05 16:24:50.764040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.767 [2024-11-05 16:24:50.764191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.767 [2024-11-05 16:24:50.764436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:37.767 [2024-11-05 16:24:50.764466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:37.767 [2024-11-05 16:24:50.764802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:37.767 [2024-11-05 16:24:50.765002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:37.767 [2024-11-05 16:24:50.765018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:37.767 [2024-11-05 16:24:50.765202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.767 16:24:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.767 "name": "raid_bdev1", 00:10:37.767 "uuid": "1288d107-d2d5-4c4e-9d3b-cdca8b62ecc1", 00:10:37.767 "strip_size_kb": 64, 00:10:37.767 "state": "online", 00:10:37.767 "raid_level": "raid0", 00:10:37.767 "superblock": true, 00:10:37.767 "num_base_bdevs": 3, 00:10:37.767 "num_base_bdevs_discovered": 3, 00:10:37.767 "num_base_bdevs_operational": 3, 00:10:37.767 "base_bdevs_list": [ 00:10:37.767 { 00:10:37.767 "name": "BaseBdev1", 00:10:37.767 "uuid": "394eaf9a-daa4-5283-9f09-61186e5ff1f9", 00:10:37.767 "is_configured": true, 00:10:37.767 "data_offset": 2048, 00:10:37.767 "data_size": 63488 00:10:37.767 }, 00:10:37.767 { 00:10:37.767 "name": "BaseBdev2", 00:10:37.767 "uuid": "418af0ec-89cc-5535-ad09-df66089d3061", 00:10:37.767 "is_configured": true, 00:10:37.767 "data_offset": 2048, 00:10:37.767 "data_size": 63488 
00:10:37.767 }, 00:10:37.767 { 00:10:37.767 "name": "BaseBdev3", 00:10:37.767 "uuid": "6ced84af-cc6a-589b-9235-39d5988ac67d", 00:10:37.767 "is_configured": true, 00:10:37.767 "data_offset": 2048, 00:10:37.767 "data_size": 63488 00:10:37.767 } 00:10:37.767 ] 00:10:37.767 }' 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.767 16:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.336 16:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:38.336 16:24:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:38.336 [2024-11-05 16:24:51.278551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.275 "name": "raid_bdev1", 00:10:39.275 "uuid": "1288d107-d2d5-4c4e-9d3b-cdca8b62ecc1", 00:10:39.275 "strip_size_kb": 64, 00:10:39.275 "state": "online", 00:10:39.275 "raid_level": "raid0", 00:10:39.275 "superblock": true, 00:10:39.275 "num_base_bdevs": 3, 00:10:39.275 "num_base_bdevs_discovered": 3, 00:10:39.275 "num_base_bdevs_operational": 3, 00:10:39.275 "base_bdevs_list": [ 00:10:39.275 { 00:10:39.275 "name": "BaseBdev1", 00:10:39.275 "uuid": "394eaf9a-daa4-5283-9f09-61186e5ff1f9", 00:10:39.275 "is_configured": true, 00:10:39.275 "data_offset": 2048, 00:10:39.275 "data_size": 63488 
00:10:39.275 }, 00:10:39.275 { 00:10:39.275 "name": "BaseBdev2", 00:10:39.275 "uuid": "418af0ec-89cc-5535-ad09-df66089d3061", 00:10:39.275 "is_configured": true, 00:10:39.275 "data_offset": 2048, 00:10:39.275 "data_size": 63488 00:10:39.275 }, 00:10:39.275 { 00:10:39.275 "name": "BaseBdev3", 00:10:39.275 "uuid": "6ced84af-cc6a-589b-9235-39d5988ac67d", 00:10:39.275 "is_configured": true, 00:10:39.275 "data_offset": 2048, 00:10:39.275 "data_size": 63488 00:10:39.275 } 00:10:39.275 ] 00:10:39.275 }' 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.275 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.845 [2024-11-05 16:24:52.659508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.845 [2024-11-05 16:24:52.659635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.845 [2024-11-05 16:24:52.662697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.845 [2024-11-05 16:24:52.662784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.845 [2024-11-05 16:24:52.662851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.845 [2024-11-05 16:24:52.662895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.845 { 00:10:39.845 "results": [ 00:10:39.845 { 00:10:39.845 "job": "raid_bdev1", 
00:10:39.845 "core_mask": "0x1", 00:10:39.845 "workload": "randrw", 00:10:39.845 "percentage": 50, 00:10:39.845 "status": "finished", 00:10:39.845 "queue_depth": 1, 00:10:39.845 "io_size": 131072, 00:10:39.845 "runtime": 1.381773, 00:10:39.845 "iops": 14474.157477385937, 00:10:39.845 "mibps": 1809.2696846732422, 00:10:39.845 "io_failed": 1, 00:10:39.845 "io_timeout": 0, 00:10:39.845 "avg_latency_us": 95.9584789319486, 00:10:39.845 "min_latency_us": 24.146724890829695, 00:10:39.845 "max_latency_us": 1645.5545851528384 00:10:39.845 } 00:10:39.845 ], 00:10:39.845 "core_count": 1 00:10:39.845 } 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65572 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65572 ']' 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65572 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65572 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65572' 00:10:39.845 killing process with pid 65572 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65572 00:10:39.845 [2024-11-05 16:24:52.709223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.845 16:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65572 00:10:40.176 [2024-11-05 
16:24:52.948585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.114 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fmstaab8cK 00:10:41.114 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:41.114 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:41.373 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:41.373 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:41.373 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.373 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.373 16:24:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:41.373 00:10:41.373 real 0m4.657s 00:10:41.373 user 0m5.555s 00:10:41.374 sys 0m0.530s 00:10:41.374 16:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.374 16:24:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 ************************************ 00:10:41.374 END TEST raid_read_error_test 00:10:41.374 ************************************ 00:10:41.374 16:24:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:41.374 16:24:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:41.374 16:24:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.374 16:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 ************************************ 00:10:41.374 START TEST raid_write_error_test 00:10:41.374 ************************************ 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:10:41.374 16:24:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:41.374 16:24:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YXjnhYszjG 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65713 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65713 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65713 ']' 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.374 16:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.374 [2024-11-05 16:24:54.398946] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:10:41.374 [2024-11-05 16:24:54.399299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65713 ] 00:10:41.634 [2024-11-05 16:24:54.581538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.634 [2024-11-05 16:24:54.704917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.893 [2024-11-05 16:24:54.917206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.893 [2024-11-05 16:24:54.917371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.460 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 BaseBdev1_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 true 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 [2024-11-05 16:24:55.360779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:42.461 [2024-11-05 16:24:55.360847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.461 [2024-11-05 16:24:55.360871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:42.461 [2024-11-05 16:24:55.360883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.461 [2024-11-05 16:24:55.363254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.461 [2024-11-05 16:24:55.363316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:42.461 BaseBdev1 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.461 BaseBdev2_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 true 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 [2024-11-05 16:24:55.434327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:42.461 [2024-11-05 16:24:55.434389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.461 [2024-11-05 16:24:55.434409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:42.461 [2024-11-05 16:24:55.434421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.461 [2024-11-05 16:24:55.437029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.461 [2024-11-05 16:24:55.437120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:42.461 BaseBdev2 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:42.461 16:24:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 BaseBdev3_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 true 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 [2024-11-05 16:24:55.512845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:42.461 [2024-11-05 16:24:55.512905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.461 [2024-11-05 16:24:55.512923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:42.461 [2024-11-05 16:24:55.512934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.461 [2024-11-05 16:24:55.515183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.461 [2024-11-05 16:24:55.515223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:42.461 BaseBdev3 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 [2024-11-05 16:24:55.524908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.461 [2024-11-05 16:24:55.526899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.461 [2024-11-05 16:24:55.526990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.461 [2024-11-05 16:24:55.527216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.461 [2024-11-05 16:24:55.527230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:42.461 [2024-11-05 16:24:55.527489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:42.461 [2024-11-05 16:24:55.527669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.461 [2024-11-05 16:24:55.527684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:42.461 [2024-11-05 16:24:55.527839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.461 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.720 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.720 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.720 "name": "raid_bdev1", 00:10:42.720 "uuid": "0972366e-c888-458e-bf8e-6b3c2e62de68", 00:10:42.720 "strip_size_kb": 64, 00:10:42.720 "state": "online", 00:10:42.720 "raid_level": "raid0", 00:10:42.720 "superblock": true, 00:10:42.720 "num_base_bdevs": 3, 00:10:42.720 "num_base_bdevs_discovered": 3, 00:10:42.720 "num_base_bdevs_operational": 3, 00:10:42.720 "base_bdevs_list": [ 00:10:42.720 { 00:10:42.720 "name": "BaseBdev1", 
00:10:42.720 "uuid": "35281cca-1543-559a-8be2-b8e7da3579ff", 00:10:42.720 "is_configured": true, 00:10:42.720 "data_offset": 2048, 00:10:42.720 "data_size": 63488 00:10:42.720 }, 00:10:42.720 { 00:10:42.720 "name": "BaseBdev2", 00:10:42.720 "uuid": "6d86e3f7-0cf9-5179-8616-12a3eb238f06", 00:10:42.720 "is_configured": true, 00:10:42.720 "data_offset": 2048, 00:10:42.720 "data_size": 63488 00:10:42.720 }, 00:10:42.720 { 00:10:42.720 "name": "BaseBdev3", 00:10:42.720 "uuid": "91c6578b-1ddd-5709-a5e4-2d56324e9e68", 00:10:42.720 "is_configured": true, 00:10:42.720 "data_offset": 2048, 00:10:42.720 "data_size": 63488 00:10:42.720 } 00:10:42.720 ] 00:10:42.720 }' 00:10:42.720 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.720 16:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.978 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:42.978 16:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.237 [2024-11-05 16:24:56.097433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.171 16:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.171 "name": "raid_bdev1", 00:10:44.171 "uuid": "0972366e-c888-458e-bf8e-6b3c2e62de68", 00:10:44.171 "strip_size_kb": 64, 00:10:44.171 "state": "online", 00:10:44.171 
"raid_level": "raid0", 00:10:44.171 "superblock": true, 00:10:44.171 "num_base_bdevs": 3, 00:10:44.171 "num_base_bdevs_discovered": 3, 00:10:44.171 "num_base_bdevs_operational": 3, 00:10:44.171 "base_bdevs_list": [ 00:10:44.171 { 00:10:44.171 "name": "BaseBdev1", 00:10:44.171 "uuid": "35281cca-1543-559a-8be2-b8e7da3579ff", 00:10:44.171 "is_configured": true, 00:10:44.171 "data_offset": 2048, 00:10:44.171 "data_size": 63488 00:10:44.171 }, 00:10:44.171 { 00:10:44.171 "name": "BaseBdev2", 00:10:44.171 "uuid": "6d86e3f7-0cf9-5179-8616-12a3eb238f06", 00:10:44.171 "is_configured": true, 00:10:44.171 "data_offset": 2048, 00:10:44.171 "data_size": 63488 00:10:44.171 }, 00:10:44.171 { 00:10:44.171 "name": "BaseBdev3", 00:10:44.171 "uuid": "91c6578b-1ddd-5709-a5e4-2d56324e9e68", 00:10:44.171 "is_configured": true, 00:10:44.171 "data_offset": 2048, 00:10:44.171 "data_size": 63488 00:10:44.171 } 00:10:44.171 ] 00:10:44.171 }' 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.171 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.429 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.429 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.429 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.430 [2024-11-05 16:24:57.417638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.430 [2024-11-05 16:24:57.417675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.430 [2024-11-05 16:24:57.420734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.430 [2024-11-05 16:24:57.420825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.430 [2024-11-05 16:24:57.420890] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.430 [2024-11-05 16:24:57.420937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:44.430 { 00:10:44.430 "results": [ 00:10:44.430 { 00:10:44.430 "job": "raid_bdev1", 00:10:44.430 "core_mask": "0x1", 00:10:44.430 "workload": "randrw", 00:10:44.430 "percentage": 50, 00:10:44.430 "status": "finished", 00:10:44.430 "queue_depth": 1, 00:10:44.430 "io_size": 131072, 00:10:44.430 "runtime": 1.32074, 00:10:44.430 "iops": 14679.649287520633, 00:10:44.430 "mibps": 1834.9561609400791, 00:10:44.430 "io_failed": 1, 00:10:44.430 "io_timeout": 0, 00:10:44.430 "avg_latency_us": 94.74838319391021, 00:10:44.430 "min_latency_us": 22.69344978165939, 00:10:44.430 "max_latency_us": 1459.5353711790392 00:10:44.430 } 00:10:44.430 ], 00:10:44.430 "core_count": 1 00:10:44.430 } 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65713 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65713 ']' 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65713 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65713 00:10:44.430 killing process with pid 65713 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:44.430 16:24:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65713' 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65713 00:10:44.430 [2024-11-05 16:24:57.467336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.430 16:24:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65713 00:10:44.688 [2024-11-05 16:24:57.711232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YXjnhYszjG 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:46.067 00:10:46.067 real 0m4.690s 00:10:46.067 user 0m5.596s 00:10:46.067 sys 0m0.563s 00:10:46.067 ************************************ 00:10:46.067 END TEST raid_write_error_test 00:10:46.067 ************************************ 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.067 16:24:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.067 16:24:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:46.067 16:24:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:46.067 16:24:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:46.067 16:24:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.067 16:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.067 ************************************ 00:10:46.067 START TEST raid_state_function_test 00:10:46.068 ************************************ 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:46.068 16:24:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65857 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65857' 00:10:46.068 Process raid pid: 65857 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65857 00:10:46.068 16:24:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65857 ']' 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:46.068 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.068 [2024-11-05 16:24:59.132895] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:10:46.068 [2024-11-05 16:24:59.133022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.327 [2024-11-05 16:24:59.311653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.586 [2024-11-05 16:24:59.438897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.586 [2024-11-05 16:24:59.663748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.587 [2024-11-05 16:24:59.663797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.154 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:47.154 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:47.154 16:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.154 16:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 [2024-11-05 16:25:00.007724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.154 [2024-11-05 16:25:00.007787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.154 [2024-11-05 16:25:00.007800] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.154 [2024-11-05 16:25:00.007811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.154 [2024-11-05 16:25:00.007818] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.154 [2024-11-05 16:25:00.007828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.154 "name": "Existed_Raid", 00:10:47.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.154 "strip_size_kb": 64, 00:10:47.154 "state": "configuring", 00:10:47.154 "raid_level": "concat", 00:10:47.154 "superblock": false, 00:10:47.154 "num_base_bdevs": 3, 00:10:47.154 "num_base_bdevs_discovered": 0, 00:10:47.154 "num_base_bdevs_operational": 3, 00:10:47.154 "base_bdevs_list": [ 00:10:47.154 { 00:10:47.154 "name": "BaseBdev1", 00:10:47.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.154 "is_configured": false, 00:10:47.154 "data_offset": 0, 00:10:47.154 "data_size": 0 00:10:47.154 }, 00:10:47.154 { 00:10:47.154 "name": "BaseBdev2", 00:10:47.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.154 "is_configured": false, 00:10:47.154 "data_offset": 0, 00:10:47.154 "data_size": 0 00:10:47.154 }, 00:10:47.154 { 00:10:47.154 "name": "BaseBdev3", 00:10:47.154 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:47.154 "is_configured": false, 00:10:47.154 "data_offset": 0, 00:10:47.154 "data_size": 0 00:10:47.154 } 00:10:47.154 ] 00:10:47.154 }' 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.154 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.413 [2024-11-05 16:25:00.478848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.413 [2024-11-05 16:25:00.478950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.413 [2024-11-05 16:25:00.494825] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.413 [2024-11-05 16:25:00.494914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.413 [2024-11-05 16:25:00.494961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.413 [2024-11-05 16:25:00.494988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:47.413 [2024-11-05 16:25:00.495009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.413 [2024-11-05 16:25:00.495033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.413 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.672 [2024-11-05 16:25:00.543395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.672 BaseBdev1 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.672 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.673 [ 00:10:47.673 { 00:10:47.673 "name": "BaseBdev1", 00:10:47.673 "aliases": [ 00:10:47.673 "efa90289-f664-4a31-8467-d969927d8d99" 00:10:47.673 ], 00:10:47.673 "product_name": "Malloc disk", 00:10:47.673 "block_size": 512, 00:10:47.673 "num_blocks": 65536, 00:10:47.673 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:47.673 "assigned_rate_limits": { 00:10:47.673 "rw_ios_per_sec": 0, 00:10:47.673 "rw_mbytes_per_sec": 0, 00:10:47.673 "r_mbytes_per_sec": 0, 00:10:47.673 "w_mbytes_per_sec": 0 00:10:47.673 }, 00:10:47.673 "claimed": true, 00:10:47.673 "claim_type": "exclusive_write", 00:10:47.673 "zoned": false, 00:10:47.673 "supported_io_types": { 00:10:47.673 "read": true, 00:10:47.673 "write": true, 00:10:47.673 "unmap": true, 00:10:47.673 "flush": true, 00:10:47.673 "reset": true, 00:10:47.673 "nvme_admin": false, 00:10:47.673 "nvme_io": false, 00:10:47.673 "nvme_io_md": false, 00:10:47.673 "write_zeroes": true, 00:10:47.673 "zcopy": true, 00:10:47.673 "get_zone_info": false, 00:10:47.673 "zone_management": false, 00:10:47.673 "zone_append": false, 00:10:47.673 "compare": false, 00:10:47.673 "compare_and_write": false, 00:10:47.673 "abort": true, 00:10:47.673 "seek_hole": false, 00:10:47.673 "seek_data": false, 00:10:47.673 "copy": true, 00:10:47.673 "nvme_iov_md": false 00:10:47.673 }, 00:10:47.673 "memory_domains": [ 00:10:47.673 { 00:10:47.673 "dma_device_id": "system", 00:10:47.673 "dma_device_type": 1 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:47.673 "dma_device_type": 2 00:10:47.673 } 00:10:47.673 ], 00:10:47.673 "driver_specific": {} 00:10:47.673 } 00:10:47.673 ] 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.673 16:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.673 "name": "Existed_Raid", 00:10:47.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.673 "strip_size_kb": 64, 00:10:47.673 "state": "configuring", 00:10:47.673 "raid_level": "concat", 00:10:47.673 "superblock": false, 00:10:47.673 "num_base_bdevs": 3, 00:10:47.673 "num_base_bdevs_discovered": 1, 00:10:47.673 "num_base_bdevs_operational": 3, 00:10:47.673 "base_bdevs_list": [ 00:10:47.673 { 00:10:47.673 "name": "BaseBdev1", 00:10:47.673 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:47.673 "is_configured": true, 00:10:47.673 "data_offset": 0, 00:10:47.673 "data_size": 65536 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "name": "BaseBdev2", 00:10:47.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.673 "is_configured": false, 00:10:47.673 "data_offset": 0, 00:10:47.673 "data_size": 0 00:10:47.673 }, 00:10:47.673 { 00:10:47.673 "name": "BaseBdev3", 00:10:47.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.673 "is_configured": false, 00:10:47.673 "data_offset": 0, 00:10:47.673 "data_size": 0 00:10:47.673 } 00:10:47.673 ] 00:10:47.673 }' 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.673 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 [2024-11-05 16:25:00.958741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.933 [2024-11-05 16:25:00.958806] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 [2024-11-05 16:25:00.970804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.933 [2024-11-05 16:25:00.973009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.933 [2024-11-05 16:25:00.973062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.933 [2024-11-05 16:25:00.973075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.933 [2024-11-05 16:25:00.973086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.933 16:25:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.933 16:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.933 "name": "Existed_Raid", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.933 "strip_size_kb": 64, 00:10:47.933 "state": "configuring", 00:10:47.933 "raid_level": "concat", 00:10:47.933 "superblock": false, 00:10:47.933 "num_base_bdevs": 3, 00:10:47.933 "num_base_bdevs_discovered": 1, 00:10:47.933 "num_base_bdevs_operational": 3, 00:10:47.933 "base_bdevs_list": [ 00:10:47.933 { 00:10:47.933 "name": "BaseBdev1", 00:10:47.933 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:47.933 "is_configured": true, 00:10:47.933 "data_offset": 
0, 00:10:47.933 "data_size": 65536 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "BaseBdev2", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.933 "is_configured": false, 00:10:47.933 "data_offset": 0, 00:10:47.933 "data_size": 0 00:10:47.933 }, 00:10:47.933 { 00:10:47.933 "name": "BaseBdev3", 00:10:47.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.933 "is_configured": false, 00:10:47.933 "data_offset": 0, 00:10:47.933 "data_size": 0 00:10:47.933 } 00:10:47.933 ] 00:10:47.933 }' 00:10:47.933 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.933 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 [2024-11-05 16:25:01.437631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.501 BaseBdev2 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
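The log above repeats one cycle of the test's pattern: `bdev_raid_create` is issued while some base bdevs are still missing, the raid stays in the `configuring` state, and `verify_raid_bdev_state` checks the JSON that `bdev_raid_get_bdevs` returns (via the logged `jq -r '.[] | select(.name == "Existed_Raid")'` filter). A minimal Python sketch of that state check, with a hypothetical `verify_state` helper standing in for the actual bdev_raid.sh shell function, run against a trimmed version of the Existed_Raid blob captured in the log:

```python
import json

# Hypothetical stand-in for bdev_raid.sh's verify_raid_bdev_state: parse the
# `bdev_raid_get_bdevs` JSON and compare the fields the test asserts on
# (state, raid level, strip size, operational base bdev count).
def verify_state(raid_info_json, expected_state, raid_level,
                 strip_size_kb, num_operational):
    info = json.loads(raid_info_json)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    return info["num_base_bdevs_discovered"]

# Trimmed copy of the raid_bdev_info JSON logged above, after BaseBdev1
# has been created and claimed but BaseBdev2/BaseBdev3 do not exist yet.
raid_info = '''{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}'''

discovered = verify_state(raid_info, "configuring", "concat", 64, 3)
print(discovered)  # 1: only BaseBdev1 has been discovered so far
```

This mirrors what the shell trace does with `rpc_cmd bdev_raid_get_bdevs all` piped through `jq`; the Python form is only an illustration of the checks, not the test harness itself.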
00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.501 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.502 [ 00:10:48.502 { 00:10:48.502 "name": "BaseBdev2", 00:10:48.502 "aliases": [ 00:10:48.502 "e3948f00-f823-44f0-9e9a-8c8cf395b1a3" 00:10:48.502 ], 00:10:48.502 "product_name": "Malloc disk", 00:10:48.502 "block_size": 512, 00:10:48.502 "num_blocks": 65536, 00:10:48.502 "uuid": "e3948f00-f823-44f0-9e9a-8c8cf395b1a3", 00:10:48.502 "assigned_rate_limits": { 00:10:48.502 "rw_ios_per_sec": 0, 00:10:48.502 "rw_mbytes_per_sec": 0, 00:10:48.502 "r_mbytes_per_sec": 0, 00:10:48.502 "w_mbytes_per_sec": 0 00:10:48.502 }, 00:10:48.502 "claimed": true, 00:10:48.502 "claim_type": "exclusive_write", 00:10:48.502 "zoned": false, 00:10:48.502 "supported_io_types": { 00:10:48.502 "read": true, 00:10:48.502 "write": true, 00:10:48.502 "unmap": true, 00:10:48.502 "flush": true, 00:10:48.502 "reset": true, 00:10:48.502 "nvme_admin": false, 00:10:48.502 "nvme_io": false, 00:10:48.502 "nvme_io_md": false, 00:10:48.502 "write_zeroes": true, 00:10:48.502 "zcopy": true, 00:10:48.502 "get_zone_info": false, 00:10:48.502 "zone_management": false, 00:10:48.502 "zone_append": false, 00:10:48.502 "compare": false, 00:10:48.502 "compare_and_write": false, 00:10:48.502 "abort": true, 00:10:48.502 "seek_hole": 
false, 00:10:48.502 "seek_data": false, 00:10:48.502 "copy": true, 00:10:48.502 "nvme_iov_md": false 00:10:48.502 }, 00:10:48.502 "memory_domains": [ 00:10:48.502 { 00:10:48.502 "dma_device_id": "system", 00:10:48.502 "dma_device_type": 1 00:10:48.502 }, 00:10:48.502 { 00:10:48.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.502 "dma_device_type": 2 00:10:48.502 } 00:10:48.502 ], 00:10:48.502 "driver_specific": {} 00:10:48.502 } 00:10:48.502 ] 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.502 "name": "Existed_Raid", 00:10:48.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.502 "strip_size_kb": 64, 00:10:48.502 "state": "configuring", 00:10:48.502 "raid_level": "concat", 00:10:48.502 "superblock": false, 00:10:48.502 "num_base_bdevs": 3, 00:10:48.502 "num_base_bdevs_discovered": 2, 00:10:48.502 "num_base_bdevs_operational": 3, 00:10:48.502 "base_bdevs_list": [ 00:10:48.502 { 00:10:48.502 "name": "BaseBdev1", 00:10:48.502 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:48.502 "is_configured": true, 00:10:48.502 "data_offset": 0, 00:10:48.502 "data_size": 65536 00:10:48.502 }, 00:10:48.502 { 00:10:48.502 "name": "BaseBdev2", 00:10:48.502 "uuid": "e3948f00-f823-44f0-9e9a-8c8cf395b1a3", 00:10:48.502 "is_configured": true, 00:10:48.502 "data_offset": 0, 00:10:48.502 "data_size": 65536 00:10:48.502 }, 00:10:48.502 { 00:10:48.502 "name": "BaseBdev3", 00:10:48.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.502 "is_configured": false, 00:10:48.502 "data_offset": 0, 00:10:48.502 "data_size": 0 00:10:48.502 } 00:10:48.502 ] 00:10:48.502 }' 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.502 16:25:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.072 [2024-11-05 16:25:01.982742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.072 [2024-11-05 16:25:01.982867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.072 [2024-11-05 16:25:01.982897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:49.072 [2024-11-05 16:25:01.983265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:49.072 [2024-11-05 16:25:01.983499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.072 [2024-11-05 16:25:01.983565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:49.072 [2024-11-05 16:25:01.983897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.072 BaseBdev3 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.072 16:25:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.072 16:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.072 [ 00:10:49.072 { 00:10:49.072 "name": "BaseBdev3", 00:10:49.072 "aliases": [ 00:10:49.072 "8d6e3855-ef3a-4147-8508-fdd66321439b" 00:10:49.072 ], 00:10:49.072 "product_name": "Malloc disk", 00:10:49.072 "block_size": 512, 00:10:49.072 "num_blocks": 65536, 00:10:49.072 "uuid": "8d6e3855-ef3a-4147-8508-fdd66321439b", 00:10:49.072 "assigned_rate_limits": { 00:10:49.072 "rw_ios_per_sec": 0, 00:10:49.072 "rw_mbytes_per_sec": 0, 00:10:49.072 "r_mbytes_per_sec": 0, 00:10:49.072 "w_mbytes_per_sec": 0 00:10:49.072 }, 00:10:49.072 "claimed": true, 00:10:49.072 "claim_type": "exclusive_write", 00:10:49.072 "zoned": false, 00:10:49.072 "supported_io_types": { 00:10:49.072 "read": true, 00:10:49.072 "write": true, 00:10:49.072 "unmap": true, 00:10:49.072 "flush": true, 00:10:49.072 "reset": true, 00:10:49.072 "nvme_admin": false, 00:10:49.072 "nvme_io": false, 00:10:49.072 "nvme_io_md": false, 00:10:49.072 "write_zeroes": true, 00:10:49.072 "zcopy": true, 00:10:49.072 "get_zone_info": false, 00:10:49.072 "zone_management": false, 00:10:49.072 "zone_append": false, 00:10:49.072 "compare": false, 
00:10:49.072 "compare_and_write": false, 00:10:49.072 "abort": true, 00:10:49.072 "seek_hole": false, 00:10:49.072 "seek_data": false, 00:10:49.072 "copy": true, 00:10:49.072 "nvme_iov_md": false 00:10:49.072 }, 00:10:49.072 "memory_domains": [ 00:10:49.072 { 00:10:49.072 "dma_device_id": "system", 00:10:49.072 "dma_device_type": 1 00:10:49.072 }, 00:10:49.072 { 00:10:49.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.072 "dma_device_type": 2 00:10:49.072 } 00:10:49.072 ], 00:10:49.072 "driver_specific": {} 00:10:49.072 } 00:10:49.072 ] 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.072 "name": "Existed_Raid", 00:10:49.072 "uuid": "3455ba62-11fd-428d-b86a-8076fa66c629", 00:10:49.072 "strip_size_kb": 64, 00:10:49.072 "state": "online", 00:10:49.072 "raid_level": "concat", 00:10:49.072 "superblock": false, 00:10:49.072 "num_base_bdevs": 3, 00:10:49.072 "num_base_bdevs_discovered": 3, 00:10:49.072 "num_base_bdevs_operational": 3, 00:10:49.072 "base_bdevs_list": [ 00:10:49.072 { 00:10:49.072 "name": "BaseBdev1", 00:10:49.072 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:49.072 "is_configured": true, 00:10:49.072 "data_offset": 0, 00:10:49.072 "data_size": 65536 00:10:49.072 }, 00:10:49.072 { 00:10:49.072 "name": "BaseBdev2", 00:10:49.072 "uuid": "e3948f00-f823-44f0-9e9a-8c8cf395b1a3", 00:10:49.072 "is_configured": true, 00:10:49.072 "data_offset": 0, 00:10:49.072 "data_size": 65536 00:10:49.072 }, 00:10:49.072 { 00:10:49.072 "name": "BaseBdev3", 00:10:49.072 "uuid": "8d6e3855-ef3a-4147-8508-fdd66321439b", 00:10:49.072 "is_configured": true, 00:10:49.072 "data_offset": 0, 00:10:49.072 "data_size": 65536 00:10:49.072 } 00:10:49.072 ] 00:10:49.072 }' 00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
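Once all three malloc base bdevs are claimed, the raid transitions to `online`, and the Raid Volume dump that follows reports `num_blocks: 196608`. For a concat raid that is simply the sum of the base bdev sizes: three malloc bdevs of 65536 blocks x 512 B each, with the 64 KiB strip size (128 blocks) dividing each base evenly. A quick sanity check of that arithmetic (a sketch of the size relationship, not SPDK code):

```python
# Sanity-check the sizes reported in the log: each malloc base bdev is
# 65536 blocks of 512 B (32 MiB), strip size is 64 KiB (128 blocks),
# and a concat raid's capacity is the sum of its base bdevs.
block_size = 512
base_blocks = 65536
num_bases = 3
strip_blocks = (64 * 1024) // block_size  # 128 blocks per strip

assert base_blocks % strip_blocks == 0    # bases align to the strip boundary
total_blocks = num_bases * base_blocks
print(total_blocks)  # 196608, matching num_blocks in the Raid Volume dump
```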
00:10:49.072 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 [2024-11-05 16:25:02.482423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.642 "name": "Existed_Raid", 00:10:49.642 "aliases": [ 00:10:49.642 "3455ba62-11fd-428d-b86a-8076fa66c629" 00:10:49.642 ], 00:10:49.642 "product_name": "Raid Volume", 00:10:49.642 "block_size": 512, 00:10:49.642 "num_blocks": 196608, 00:10:49.642 "uuid": "3455ba62-11fd-428d-b86a-8076fa66c629", 00:10:49.642 "assigned_rate_limits": { 00:10:49.642 "rw_ios_per_sec": 0, 00:10:49.642 "rw_mbytes_per_sec": 0, 00:10:49.642 "r_mbytes_per_sec": 
0, 00:10:49.642 "w_mbytes_per_sec": 0 00:10:49.642 }, 00:10:49.642 "claimed": false, 00:10:49.642 "zoned": false, 00:10:49.642 "supported_io_types": { 00:10:49.642 "read": true, 00:10:49.642 "write": true, 00:10:49.642 "unmap": true, 00:10:49.642 "flush": true, 00:10:49.642 "reset": true, 00:10:49.642 "nvme_admin": false, 00:10:49.642 "nvme_io": false, 00:10:49.642 "nvme_io_md": false, 00:10:49.642 "write_zeroes": true, 00:10:49.642 "zcopy": false, 00:10:49.642 "get_zone_info": false, 00:10:49.642 "zone_management": false, 00:10:49.642 "zone_append": false, 00:10:49.642 "compare": false, 00:10:49.642 "compare_and_write": false, 00:10:49.642 "abort": false, 00:10:49.642 "seek_hole": false, 00:10:49.642 "seek_data": false, 00:10:49.642 "copy": false, 00:10:49.642 "nvme_iov_md": false 00:10:49.642 }, 00:10:49.642 "memory_domains": [ 00:10:49.642 { 00:10:49.642 "dma_device_id": "system", 00:10:49.642 "dma_device_type": 1 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.642 "dma_device_type": 2 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "dma_device_id": "system", 00:10:49.642 "dma_device_type": 1 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.642 "dma_device_type": 2 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "dma_device_id": "system", 00:10:49.642 "dma_device_type": 1 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.642 "dma_device_type": 2 00:10:49.642 } 00:10:49.642 ], 00:10:49.642 "driver_specific": { 00:10:49.642 "raid": { 00:10:49.642 "uuid": "3455ba62-11fd-428d-b86a-8076fa66c629", 00:10:49.642 "strip_size_kb": 64, 00:10:49.642 "state": "online", 00:10:49.642 "raid_level": "concat", 00:10:49.642 "superblock": false, 00:10:49.642 "num_base_bdevs": 3, 00:10:49.642 "num_base_bdevs_discovered": 3, 00:10:49.642 "num_base_bdevs_operational": 3, 00:10:49.642 "base_bdevs_list": [ 00:10:49.642 { 00:10:49.642 "name": "BaseBdev1", 
00:10:49.642 "uuid": "efa90289-f664-4a31-8467-d969927d8d99", 00:10:49.642 "is_configured": true, 00:10:49.642 "data_offset": 0, 00:10:49.642 "data_size": 65536 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "name": "BaseBdev2", 00:10:49.642 "uuid": "e3948f00-f823-44f0-9e9a-8c8cf395b1a3", 00:10:49.642 "is_configured": true, 00:10:49.642 "data_offset": 0, 00:10:49.642 "data_size": 65536 00:10:49.642 }, 00:10:49.642 { 00:10:49.642 "name": "BaseBdev3", 00:10:49.642 "uuid": "8d6e3855-ef3a-4147-8508-fdd66321439b", 00:10:49.642 "is_configured": true, 00:10:49.642 "data_offset": 0, 00:10:49.642 "data_size": 65536 00:10:49.642 } 00:10:49.642 ] 00:10:49.642 } 00:10:49.642 } 00:10:49.642 }' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:49.642 BaseBdev2 00:10:49.642 BaseBdev3' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.642 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.643 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.643 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.643 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.643 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.902 [2024-11-05 16:25:02.757760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.902 [2024-11-05 16:25:02.757898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.902 [2024-11-05 16:25:02.757992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.902 "name": "Existed_Raid", 00:10:49.902 "uuid": "3455ba62-11fd-428d-b86a-8076fa66c629", 00:10:49.902 "strip_size_kb": 64, 00:10:49.902 "state": "offline", 00:10:49.902 "raid_level": "concat", 00:10:49.902 "superblock": false, 00:10:49.902 "num_base_bdevs": 3, 00:10:49.902 "num_base_bdevs_discovered": 2, 00:10:49.902 "num_base_bdevs_operational": 2, 00:10:49.902 "base_bdevs_list": [ 00:10:49.902 { 00:10:49.902 "name": null, 00:10:49.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.902 "is_configured": false, 00:10:49.902 "data_offset": 0, 00:10:49.902 "data_size": 65536 00:10:49.902 }, 00:10:49.902 { 00:10:49.902 "name": "BaseBdev2", 00:10:49.902 "uuid": 
"e3948f00-f823-44f0-9e9a-8c8cf395b1a3", 00:10:49.902 "is_configured": true, 00:10:49.902 "data_offset": 0, 00:10:49.902 "data_size": 65536 00:10:49.902 }, 00:10:49.902 { 00:10:49.902 "name": "BaseBdev3", 00:10:49.902 "uuid": "8d6e3855-ef3a-4147-8508-fdd66321439b", 00:10:49.902 "is_configured": true, 00:10:49.902 "data_offset": 0, 00:10:49.902 "data_size": 65536 00:10:49.902 } 00:10:49.902 ] 00:10:49.902 }' 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.902 16:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.469 [2024-11-05 16:25:03.394899] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.469 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.728 [2024-11-05 16:25:03.560589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.728 [2024-11-05 16:25:03.560765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.728 16:25:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.728 BaseBdev2 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:50.728 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.728 
16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.729 [ 00:10:50.729 { 00:10:50.729 "name": "BaseBdev2", 00:10:50.729 "aliases": [ 00:10:50.729 "c69d6eba-9c82-490b-a461-c921d90516e0" 00:10:50.729 ], 00:10:50.729 "product_name": "Malloc disk", 00:10:50.729 "block_size": 512, 00:10:50.729 "num_blocks": 65536, 00:10:50.729 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:50.729 "assigned_rate_limits": { 00:10:50.729 "rw_ios_per_sec": 0, 00:10:50.729 "rw_mbytes_per_sec": 0, 00:10:50.729 "r_mbytes_per_sec": 0, 00:10:50.729 "w_mbytes_per_sec": 0 00:10:50.729 }, 00:10:50.729 "claimed": false, 00:10:50.729 "zoned": false, 00:10:50.729 "supported_io_types": { 00:10:50.729 "read": true, 00:10:50.729 "write": true, 00:10:50.729 "unmap": true, 00:10:50.729 "flush": true, 00:10:50.729 "reset": true, 00:10:50.729 "nvme_admin": false, 00:10:50.729 "nvme_io": false, 00:10:50.729 "nvme_io_md": false, 00:10:50.729 "write_zeroes": true, 
00:10:50.729 "zcopy": true, 00:10:50.729 "get_zone_info": false, 00:10:50.729 "zone_management": false, 00:10:50.729 "zone_append": false, 00:10:50.729 "compare": false, 00:10:50.729 "compare_and_write": false, 00:10:50.729 "abort": true, 00:10:50.729 "seek_hole": false, 00:10:50.729 "seek_data": false, 00:10:50.729 "copy": true, 00:10:50.729 "nvme_iov_md": false 00:10:50.729 }, 00:10:50.729 "memory_domains": [ 00:10:50.729 { 00:10:50.729 "dma_device_id": "system", 00:10:50.729 "dma_device_type": 1 00:10:50.729 }, 00:10:50.729 { 00:10:50.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.729 "dma_device_type": 2 00:10:50.729 } 00:10:50.729 ], 00:10:50.729 "driver_specific": {} 00:10:50.729 } 00:10:50.729 ] 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.729 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 BaseBdev3 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.988 16:25:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 [ 00:10:50.988 { 00:10:50.988 "name": "BaseBdev3", 00:10:50.988 "aliases": [ 00:10:50.988 "eaae4bad-b734-45a9-97b8-4a720d820207" 00:10:50.988 ], 00:10:50.988 "product_name": "Malloc disk", 00:10:50.988 "block_size": 512, 00:10:50.988 "num_blocks": 65536, 00:10:50.988 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:50.988 "assigned_rate_limits": { 00:10:50.988 "rw_ios_per_sec": 0, 00:10:50.988 "rw_mbytes_per_sec": 0, 00:10:50.988 "r_mbytes_per_sec": 0, 00:10:50.988 "w_mbytes_per_sec": 0 00:10:50.988 }, 00:10:50.988 "claimed": false, 00:10:50.988 "zoned": false, 00:10:50.988 "supported_io_types": { 00:10:50.988 "read": true, 00:10:50.988 "write": true, 00:10:50.988 "unmap": true, 00:10:50.988 "flush": true, 00:10:50.988 "reset": true, 00:10:50.988 "nvme_admin": false, 00:10:50.988 "nvme_io": false, 00:10:50.988 "nvme_io_md": false, 00:10:50.988 "write_zeroes": true, 
00:10:50.988 "zcopy": true, 00:10:50.988 "get_zone_info": false, 00:10:50.988 "zone_management": false, 00:10:50.988 "zone_append": false, 00:10:50.988 "compare": false, 00:10:50.988 "compare_and_write": false, 00:10:50.988 "abort": true, 00:10:50.988 "seek_hole": false, 00:10:50.988 "seek_data": false, 00:10:50.988 "copy": true, 00:10:50.988 "nvme_iov_md": false 00:10:50.988 }, 00:10:50.988 "memory_domains": [ 00:10:50.988 { 00:10:50.988 "dma_device_id": "system", 00:10:50.988 "dma_device_type": 1 00:10:50.988 }, 00:10:50.988 { 00:10:50.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.988 "dma_device_type": 2 00:10:50.988 } 00:10:50.988 ], 00:10:50.988 "driver_specific": {} 00:10:50.988 } 00:10:50.988 ] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 [2024-11-05 16:25:03.872171] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.988 [2024-11-05 16:25:03.872327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.988 [2024-11-05 16:25:03.872384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.988 [2024-11-05 16:25:03.874948] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.988 "name": "Existed_Raid", 00:10:50.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.988 "strip_size_kb": 64, 00:10:50.988 "state": "configuring", 00:10:50.988 "raid_level": "concat", 00:10:50.988 "superblock": false, 00:10:50.988 "num_base_bdevs": 3, 00:10:50.988 "num_base_bdevs_discovered": 2, 00:10:50.988 "num_base_bdevs_operational": 3, 00:10:50.988 "base_bdevs_list": [ 00:10:50.988 { 00:10:50.988 "name": "BaseBdev1", 00:10:50.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.988 "is_configured": false, 00:10:50.988 "data_offset": 0, 00:10:50.988 "data_size": 0 00:10:50.988 }, 00:10:50.988 { 00:10:50.988 "name": "BaseBdev2", 00:10:50.988 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:50.988 "is_configured": true, 00:10:50.988 "data_offset": 0, 00:10:50.988 "data_size": 65536 00:10:50.988 }, 00:10:50.988 { 00:10:50.988 "name": "BaseBdev3", 00:10:50.988 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:50.988 "is_configured": true, 00:10:50.988 "data_offset": 0, 00:10:50.988 "data_size": 65536 00:10:50.988 } 00:10:50.988 ] 00:10:50.988 }' 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.988 16:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.556 [2024-11-05 16:25:04.347463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.556 "name": "Existed_Raid", 00:10:51.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.556 "strip_size_kb": 64, 00:10:51.556 "state": "configuring", 00:10:51.556 "raid_level": "concat", 00:10:51.556 "superblock": false, 
00:10:51.556 "num_base_bdevs": 3, 00:10:51.556 "num_base_bdevs_discovered": 1, 00:10:51.556 "num_base_bdevs_operational": 3, 00:10:51.556 "base_bdevs_list": [ 00:10:51.556 { 00:10:51.556 "name": "BaseBdev1", 00:10:51.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.556 "is_configured": false, 00:10:51.556 "data_offset": 0, 00:10:51.556 "data_size": 0 00:10:51.556 }, 00:10:51.556 { 00:10:51.556 "name": null, 00:10:51.556 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:51.556 "is_configured": false, 00:10:51.556 "data_offset": 0, 00:10:51.556 "data_size": 65536 00:10:51.556 }, 00:10:51.556 { 00:10:51.556 "name": "BaseBdev3", 00:10:51.556 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:51.556 "is_configured": true, 00:10:51.556 "data_offset": 0, 00:10:51.556 "data_size": 65536 00:10:51.556 } 00:10:51.556 ] 00:10:51.556 }' 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.556 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.814 
16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.814 [2024-11-05 16:25:04.892783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.814 BaseBdev1 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.814 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.073 [ 00:10:52.073 { 00:10:52.073 "name": "BaseBdev1", 00:10:52.073 "aliases": [ 00:10:52.073 "33fadb2c-4668-497d-80c9-2bcfbb381200" 00:10:52.073 ], 00:10:52.073 "product_name": 
"Malloc disk", 00:10:52.073 "block_size": 512, 00:10:52.073 "num_blocks": 65536, 00:10:52.073 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:52.073 "assigned_rate_limits": { 00:10:52.073 "rw_ios_per_sec": 0, 00:10:52.073 "rw_mbytes_per_sec": 0, 00:10:52.073 "r_mbytes_per_sec": 0, 00:10:52.073 "w_mbytes_per_sec": 0 00:10:52.073 }, 00:10:52.073 "claimed": true, 00:10:52.073 "claim_type": "exclusive_write", 00:10:52.073 "zoned": false, 00:10:52.073 "supported_io_types": { 00:10:52.073 "read": true, 00:10:52.073 "write": true, 00:10:52.073 "unmap": true, 00:10:52.073 "flush": true, 00:10:52.073 "reset": true, 00:10:52.073 "nvme_admin": false, 00:10:52.073 "nvme_io": false, 00:10:52.073 "nvme_io_md": false, 00:10:52.073 "write_zeroes": true, 00:10:52.073 "zcopy": true, 00:10:52.073 "get_zone_info": false, 00:10:52.073 "zone_management": false, 00:10:52.073 "zone_append": false, 00:10:52.073 "compare": false, 00:10:52.073 "compare_and_write": false, 00:10:52.073 "abort": true, 00:10:52.073 "seek_hole": false, 00:10:52.073 "seek_data": false, 00:10:52.073 "copy": true, 00:10:52.073 "nvme_iov_md": false 00:10:52.073 }, 00:10:52.073 "memory_domains": [ 00:10:52.073 { 00:10:52.073 "dma_device_id": "system", 00:10:52.073 "dma_device_type": 1 00:10:52.073 }, 00:10:52.073 { 00:10:52.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.073 "dma_device_type": 2 00:10:52.073 } 00:10:52.073 ], 00:10:52.073 "driver_specific": {} 00:10:52.073 } 00:10:52.073 ] 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.073 16:25:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.073 "name": "Existed_Raid", 00:10:52.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.073 "strip_size_kb": 64, 00:10:52.073 "state": "configuring", 00:10:52.073 "raid_level": "concat", 00:10:52.073 "superblock": false, 00:10:52.073 "num_base_bdevs": 3, 00:10:52.073 "num_base_bdevs_discovered": 2, 00:10:52.073 "num_base_bdevs_operational": 3, 00:10:52.073 "base_bdevs_list": [ 00:10:52.073 { 00:10:52.073 "name": "BaseBdev1", 
00:10:52.073 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:52.073 "is_configured": true, 00:10:52.073 "data_offset": 0, 00:10:52.073 "data_size": 65536 00:10:52.073 }, 00:10:52.073 { 00:10:52.073 "name": null, 00:10:52.073 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:52.073 "is_configured": false, 00:10:52.073 "data_offset": 0, 00:10:52.073 "data_size": 65536 00:10:52.073 }, 00:10:52.073 { 00:10:52.073 "name": "BaseBdev3", 00:10:52.073 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:52.073 "is_configured": true, 00:10:52.073 "data_offset": 0, 00:10:52.073 "data_size": 65536 00:10:52.073 } 00:10:52.073 ] 00:10:52.073 }' 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.073 16:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.332 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.332 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.332 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.332 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.332 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.591 [2024-11-05 16:25:05.432007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.591 
16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.591 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.592 "name": "Existed_Raid", 00:10:52.592 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:52.592 "strip_size_kb": 64, 00:10:52.592 "state": "configuring", 00:10:52.592 "raid_level": "concat", 00:10:52.592 "superblock": false, 00:10:52.592 "num_base_bdevs": 3, 00:10:52.592 "num_base_bdevs_discovered": 1, 00:10:52.592 "num_base_bdevs_operational": 3, 00:10:52.592 "base_bdevs_list": [ 00:10:52.592 { 00:10:52.592 "name": "BaseBdev1", 00:10:52.592 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:52.592 "is_configured": true, 00:10:52.592 "data_offset": 0, 00:10:52.592 "data_size": 65536 00:10:52.592 }, 00:10:52.592 { 00:10:52.592 "name": null, 00:10:52.592 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:52.592 "is_configured": false, 00:10:52.592 "data_offset": 0, 00:10:52.592 "data_size": 65536 00:10:52.592 }, 00:10:52.592 { 00:10:52.592 "name": null, 00:10:52.592 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:52.592 "is_configured": false, 00:10:52.592 "data_offset": 0, 00:10:52.592 "data_size": 65536 00:10:52.592 } 00:10:52.592 ] 00:10:52.592 }' 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.592 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.851 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.851 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.851 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.851 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.851 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.110 [2024-11-05 16:25:05.971118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.110 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.111 16:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.111 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.111 "name": "Existed_Raid", 00:10:53.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.111 "strip_size_kb": 64, 00:10:53.111 "state": "configuring", 00:10:53.111 "raid_level": "concat", 00:10:53.111 "superblock": false, 00:10:53.111 "num_base_bdevs": 3, 00:10:53.111 "num_base_bdevs_discovered": 2, 00:10:53.111 "num_base_bdevs_operational": 3, 00:10:53.111 "base_bdevs_list": [ 00:10:53.111 { 00:10:53.111 "name": "BaseBdev1", 00:10:53.111 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:53.111 "is_configured": true, 00:10:53.111 "data_offset": 0, 00:10:53.111 "data_size": 65536 00:10:53.111 }, 00:10:53.111 { 00:10:53.111 "name": null, 00:10:53.111 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:53.111 "is_configured": false, 00:10:53.111 "data_offset": 0, 00:10:53.111 "data_size": 65536 00:10:53.111 }, 00:10:53.111 { 00:10:53.111 "name": "BaseBdev3", 00:10:53.111 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:53.111 "is_configured": true, 00:10:53.111 "data_offset": 0, 00:10:53.111 "data_size": 65536 00:10:53.111 } 00:10:53.111 ] 00:10:53.111 }' 00:10:53.111 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.111 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.369 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.369 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.369 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:53.369 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.369 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.628 [2024-11-05 16:25:06.478275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.628 16:25:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.628 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.628 "name": "Existed_Raid", 00:10:53.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.628 "strip_size_kb": 64, 00:10:53.628 "state": "configuring", 00:10:53.628 "raid_level": "concat", 00:10:53.628 "superblock": false, 00:10:53.628 "num_base_bdevs": 3, 00:10:53.628 "num_base_bdevs_discovered": 1, 00:10:53.628 "num_base_bdevs_operational": 3, 00:10:53.628 "base_bdevs_list": [ 00:10:53.628 { 00:10:53.628 "name": null, 00:10:53.628 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:53.628 "is_configured": false, 00:10:53.629 "data_offset": 0, 00:10:53.629 "data_size": 65536 00:10:53.629 }, 00:10:53.629 { 00:10:53.629 "name": null, 00:10:53.629 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:53.629 "is_configured": false, 00:10:53.629 "data_offset": 0, 00:10:53.629 "data_size": 65536 00:10:53.629 }, 00:10:53.629 { 00:10:53.629 "name": "BaseBdev3", 00:10:53.629 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:53.629 "is_configured": true, 00:10:53.629 "data_offset": 0, 00:10:53.629 "data_size": 65536 00:10:53.629 } 00:10:53.629 ] 00:10:53.629 }' 00:10:53.629 16:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.629 16:25:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 [2024-11-05 16:25:07.133657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.198 16:25:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.198 "name": "Existed_Raid", 00:10:54.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.198 "strip_size_kb": 64, 00:10:54.198 "state": "configuring", 00:10:54.198 "raid_level": "concat", 00:10:54.198 "superblock": false, 00:10:54.198 "num_base_bdevs": 3, 00:10:54.198 "num_base_bdevs_discovered": 2, 00:10:54.198 "num_base_bdevs_operational": 3, 00:10:54.198 "base_bdevs_list": [ 00:10:54.198 { 00:10:54.198 "name": null, 00:10:54.198 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:54.198 "is_configured": false, 00:10:54.198 "data_offset": 0, 00:10:54.198 "data_size": 65536 00:10:54.198 }, 00:10:54.198 { 00:10:54.198 "name": "BaseBdev2", 00:10:54.198 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:54.198 "is_configured": true, 00:10:54.198 "data_offset": 
0, 00:10:54.198 "data_size": 65536 00:10:54.198 }, 00:10:54.198 { 00:10:54.198 "name": "BaseBdev3", 00:10:54.198 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:54.198 "is_configured": true, 00:10:54.198 "data_offset": 0, 00:10:54.198 "data_size": 65536 00:10:54.198 } 00:10:54.198 ] 00:10:54.198 }' 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.198 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.457 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.457 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.457 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.457 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 33fadb2c-4668-497d-80c9-2bcfbb381200 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.716 [2024-11-05 16:25:07.655319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.716 [2024-11-05 16:25:07.655476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.716 [2024-11-05 16:25:07.655492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:54.716 [2024-11-05 16:25:07.655824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:54.716 [2024-11-05 16:25:07.656012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.716 [2024-11-05 16:25:07.656023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:54.716 [2024-11-05 16:25:07.656325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.716 NewBaseBdev 00:10:54.716 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.717 
16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.717 [ 00:10:54.717 { 00:10:54.717 "name": "NewBaseBdev", 00:10:54.717 "aliases": [ 00:10:54.717 "33fadb2c-4668-497d-80c9-2bcfbb381200" 00:10:54.717 ], 00:10:54.717 "product_name": "Malloc disk", 00:10:54.717 "block_size": 512, 00:10:54.717 "num_blocks": 65536, 00:10:54.717 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:54.717 "assigned_rate_limits": { 00:10:54.717 "rw_ios_per_sec": 0, 00:10:54.717 "rw_mbytes_per_sec": 0, 00:10:54.717 "r_mbytes_per_sec": 0, 00:10:54.717 "w_mbytes_per_sec": 0 00:10:54.717 }, 00:10:54.717 "claimed": true, 00:10:54.717 "claim_type": "exclusive_write", 00:10:54.717 "zoned": false, 00:10:54.717 "supported_io_types": { 00:10:54.717 "read": true, 00:10:54.717 "write": true, 00:10:54.717 "unmap": true, 00:10:54.717 "flush": true, 00:10:54.717 "reset": true, 00:10:54.717 "nvme_admin": false, 00:10:54.717 "nvme_io": false, 00:10:54.717 "nvme_io_md": false, 00:10:54.717 "write_zeroes": true, 00:10:54.717 "zcopy": true, 00:10:54.717 "get_zone_info": false, 00:10:54.717 "zone_management": false, 00:10:54.717 "zone_append": false, 00:10:54.717 "compare": false, 00:10:54.717 "compare_and_write": false, 00:10:54.717 "abort": true, 00:10:54.717 "seek_hole": false, 00:10:54.717 "seek_data": false, 00:10:54.717 "copy": true, 00:10:54.717 "nvme_iov_md": false 00:10:54.717 }, 00:10:54.717 
"memory_domains": [ 00:10:54.717 { 00:10:54.717 "dma_device_id": "system", 00:10:54.717 "dma_device_type": 1 00:10:54.717 }, 00:10:54.717 { 00:10:54.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.717 "dma_device_type": 2 00:10:54.717 } 00:10:54.717 ], 00:10:54.717 "driver_specific": {} 00:10:54.717 } 00:10:54.717 ] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.717 "name": "Existed_Raid", 00:10:54.717 "uuid": "1b156182-b7ec-46d1-b88d-fa866d10f859", 00:10:54.717 "strip_size_kb": 64, 00:10:54.717 "state": "online", 00:10:54.717 "raid_level": "concat", 00:10:54.717 "superblock": false, 00:10:54.717 "num_base_bdevs": 3, 00:10:54.717 "num_base_bdevs_discovered": 3, 00:10:54.717 "num_base_bdevs_operational": 3, 00:10:54.717 "base_bdevs_list": [ 00:10:54.717 { 00:10:54.717 "name": "NewBaseBdev", 00:10:54.717 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:54.717 "is_configured": true, 00:10:54.717 "data_offset": 0, 00:10:54.717 "data_size": 65536 00:10:54.717 }, 00:10:54.717 { 00:10:54.717 "name": "BaseBdev2", 00:10:54.717 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:54.717 "is_configured": true, 00:10:54.717 "data_offset": 0, 00:10:54.717 "data_size": 65536 00:10:54.717 }, 00:10:54.717 { 00:10:54.717 "name": "BaseBdev3", 00:10:54.717 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:54.717 "is_configured": true, 00:10:54.717 "data_offset": 0, 00:10:54.717 "data_size": 65536 00:10:54.717 } 00:10:54.717 ] 00:10:54.717 }' 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.717 16:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.286 [2024-11-05 16:25:08.182869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.286 "name": "Existed_Raid", 00:10:55.286 "aliases": [ 00:10:55.286 "1b156182-b7ec-46d1-b88d-fa866d10f859" 00:10:55.286 ], 00:10:55.286 "product_name": "Raid Volume", 00:10:55.286 "block_size": 512, 00:10:55.286 "num_blocks": 196608, 00:10:55.286 "uuid": "1b156182-b7ec-46d1-b88d-fa866d10f859", 00:10:55.286 "assigned_rate_limits": { 00:10:55.286 "rw_ios_per_sec": 0, 00:10:55.286 "rw_mbytes_per_sec": 0, 00:10:55.286 "r_mbytes_per_sec": 0, 00:10:55.286 "w_mbytes_per_sec": 0 00:10:55.286 }, 00:10:55.286 "claimed": false, 00:10:55.286 "zoned": false, 00:10:55.286 "supported_io_types": { 00:10:55.286 "read": true, 00:10:55.286 "write": true, 00:10:55.286 "unmap": true, 00:10:55.286 "flush": true, 00:10:55.286 "reset": true, 00:10:55.286 "nvme_admin": false, 00:10:55.286 "nvme_io": false, 00:10:55.286 "nvme_io_md": false, 00:10:55.286 "write_zeroes": true, 
00:10:55.286 "zcopy": false, 00:10:55.286 "get_zone_info": false, 00:10:55.286 "zone_management": false, 00:10:55.286 "zone_append": false, 00:10:55.286 "compare": false, 00:10:55.286 "compare_and_write": false, 00:10:55.286 "abort": false, 00:10:55.286 "seek_hole": false, 00:10:55.286 "seek_data": false, 00:10:55.286 "copy": false, 00:10:55.286 "nvme_iov_md": false 00:10:55.286 }, 00:10:55.286 "memory_domains": [ 00:10:55.286 { 00:10:55.286 "dma_device_id": "system", 00:10:55.286 "dma_device_type": 1 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.286 "dma_device_type": 2 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "dma_device_id": "system", 00:10:55.286 "dma_device_type": 1 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.286 "dma_device_type": 2 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "dma_device_id": "system", 00:10:55.286 "dma_device_type": 1 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.286 "dma_device_type": 2 00:10:55.286 } 00:10:55.286 ], 00:10:55.286 "driver_specific": { 00:10:55.286 "raid": { 00:10:55.286 "uuid": "1b156182-b7ec-46d1-b88d-fa866d10f859", 00:10:55.286 "strip_size_kb": 64, 00:10:55.286 "state": "online", 00:10:55.286 "raid_level": "concat", 00:10:55.286 "superblock": false, 00:10:55.286 "num_base_bdevs": 3, 00:10:55.286 "num_base_bdevs_discovered": 3, 00:10:55.286 "num_base_bdevs_operational": 3, 00:10:55.286 "base_bdevs_list": [ 00:10:55.286 { 00:10:55.286 "name": "NewBaseBdev", 00:10:55.286 "uuid": "33fadb2c-4668-497d-80c9-2bcfbb381200", 00:10:55.286 "is_configured": true, 00:10:55.286 "data_offset": 0, 00:10:55.286 "data_size": 65536 00:10:55.286 }, 00:10:55.286 { 00:10:55.286 "name": "BaseBdev2", 00:10:55.286 "uuid": "c69d6eba-9c82-490b-a461-c921d90516e0", 00:10:55.286 "is_configured": true, 00:10:55.286 "data_offset": 0, 00:10:55.286 "data_size": 65536 00:10:55.286 }, 00:10:55.286 { 
00:10:55.286 "name": "BaseBdev3", 00:10:55.286 "uuid": "eaae4bad-b734-45a9-97b8-4a720d820207", 00:10:55.286 "is_configured": true, 00:10:55.286 "data_offset": 0, 00:10:55.286 "data_size": 65536 00:10:55.286 } 00:10:55.286 ] 00:10:55.286 } 00:10:55.286 } 00:10:55.286 }' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.286 BaseBdev2 00:10:55.286 BaseBdev3' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.286 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:55.546 [2024-11-05 16:25:08.470059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.546 [2024-11-05 16:25:08.470095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.546 [2024-11-05 16:25:08.470190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.546 [2024-11-05 16:25:08.470255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.546 [2024-11-05 16:25:08.470269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65857 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65857 ']' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65857 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65857 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65857' 00:10:55.546 killing process with pid 65857 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65857 00:10:55.546 [2024-11-05 16:25:08.521326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.546 16:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65857 00:10:55.806 [2024-11-05 16:25:08.850903] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.183 00:10:57.183 real 0m10.978s 00:10:57.183 user 0m17.446s 00:10:57.183 sys 0m1.877s 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.183 ************************************ 00:10:57.183 END TEST raid_state_function_test 00:10:57.183 ************************************ 00:10:57.183 16:25:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:57.183 16:25:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:57.183 16:25:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.183 16:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.183 ************************************ 00:10:57.183 START TEST raid_state_function_test_sb 00:10:57.183 ************************************ 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:57.183 Process raid pid: 66486 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66486 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66486' 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66486 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66486 ']' 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:57.183 16:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.183 [2024-11-05 16:25:10.179686] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:10:57.183 [2024-11-05 16:25:10.179805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.442 [2024-11-05 16:25:10.354980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.442 [2024-11-05 16:25:10.474653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.702 [2024-11-05 16:25:10.677861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.702 [2024-11-05 16:25:10.677922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.963 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.963 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:57.963 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:57.964 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.964 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.964 [2024-11-05 16:25:11.048339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.964 [2024-11-05 16:25:11.048396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.964 [2024-11-05 
16:25:11.048407] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.964 [2024-11-05 16:25:11.048418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.964 [2024-11-05 16:25:11.048424] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.964 [2024-11-05 16:25:11.048433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.222 "name": "Existed_Raid", 00:10:58.222 "uuid": "c6c6e703-0f42-472a-a7a3-76e18cc8f34b", 00:10:58.222 "strip_size_kb": 64, 00:10:58.222 "state": "configuring", 00:10:58.222 "raid_level": "concat", 00:10:58.222 "superblock": true, 00:10:58.222 "num_base_bdevs": 3, 00:10:58.222 "num_base_bdevs_discovered": 0, 00:10:58.222 "num_base_bdevs_operational": 3, 00:10:58.222 "base_bdevs_list": [ 00:10:58.222 { 00:10:58.222 "name": "BaseBdev1", 00:10:58.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.222 "is_configured": false, 00:10:58.222 "data_offset": 0, 00:10:58.222 "data_size": 0 00:10:58.222 }, 00:10:58.222 { 00:10:58.222 "name": "BaseBdev2", 00:10:58.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.222 "is_configured": false, 00:10:58.222 "data_offset": 0, 00:10:58.222 "data_size": 0 00:10:58.222 }, 00:10:58.222 { 00:10:58.222 "name": "BaseBdev3", 00:10:58.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.222 "is_configured": false, 00:10:58.222 "data_offset": 0, 00:10:58.222 "data_size": 0 00:10:58.222 } 00:10:58.222 ] 00:10:58.222 }' 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.222 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 [2024-11-05 16:25:11.435658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.481 [2024-11-05 16:25:11.435775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 [2024-11-05 16:25:11.447639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.481 [2024-11-05 16:25:11.447732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.481 [2024-11-05 16:25:11.447765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.481 [2024-11-05 16:25:11.447792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.481 [2024-11-05 16:25:11.447813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.481 [2024-11-05 16:25:11.447837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.481 
16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 [2024-11-05 16:25:11.495540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.481 BaseBdev1 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 [ 00:10:58.481 { 
00:10:58.481 "name": "BaseBdev1", 00:10:58.481 "aliases": [ 00:10:58.481 "b57cc376-b65a-43d4-9bd7-3caa260660e8" 00:10:58.481 ], 00:10:58.481 "product_name": "Malloc disk", 00:10:58.481 "block_size": 512, 00:10:58.481 "num_blocks": 65536, 00:10:58.481 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:10:58.481 "assigned_rate_limits": { 00:10:58.481 "rw_ios_per_sec": 0, 00:10:58.481 "rw_mbytes_per_sec": 0, 00:10:58.481 "r_mbytes_per_sec": 0, 00:10:58.481 "w_mbytes_per_sec": 0 00:10:58.481 }, 00:10:58.481 "claimed": true, 00:10:58.481 "claim_type": "exclusive_write", 00:10:58.481 "zoned": false, 00:10:58.481 "supported_io_types": { 00:10:58.481 "read": true, 00:10:58.481 "write": true, 00:10:58.481 "unmap": true, 00:10:58.481 "flush": true, 00:10:58.481 "reset": true, 00:10:58.481 "nvme_admin": false, 00:10:58.481 "nvme_io": false, 00:10:58.481 "nvme_io_md": false, 00:10:58.481 "write_zeroes": true, 00:10:58.481 "zcopy": true, 00:10:58.481 "get_zone_info": false, 00:10:58.481 "zone_management": false, 00:10:58.481 "zone_append": false, 00:10:58.481 "compare": false, 00:10:58.481 "compare_and_write": false, 00:10:58.481 "abort": true, 00:10:58.481 "seek_hole": false, 00:10:58.481 "seek_data": false, 00:10:58.481 "copy": true, 00:10:58.481 "nvme_iov_md": false 00:10:58.481 }, 00:10:58.481 "memory_domains": [ 00:10:58.481 { 00:10:58.481 "dma_device_id": "system", 00:10:58.481 "dma_device_type": 1 00:10:58.481 }, 00:10:58.481 { 00:10:58.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.481 "dma_device_type": 2 00:10:58.481 } 00:10:58.481 ], 00:10:58.481 "driver_specific": {} 00:10:58.481 } 00:10:58.481 ] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.481 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.740 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.740 "name": "Existed_Raid", 00:10:58.740 "uuid": "78d48168-722f-4536-a124-68bde598fb72", 00:10:58.740 "strip_size_kb": 64, 00:10:58.740 "state": "configuring", 00:10:58.740 "raid_level": "concat", 00:10:58.740 "superblock": true, 00:10:58.740 
"num_base_bdevs": 3, 00:10:58.740 "num_base_bdevs_discovered": 1, 00:10:58.740 "num_base_bdevs_operational": 3, 00:10:58.740 "base_bdevs_list": [ 00:10:58.740 { 00:10:58.740 "name": "BaseBdev1", 00:10:58.740 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:10:58.740 "is_configured": true, 00:10:58.740 "data_offset": 2048, 00:10:58.740 "data_size": 63488 00:10:58.740 }, 00:10:58.740 { 00:10:58.740 "name": "BaseBdev2", 00:10:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.740 "is_configured": false, 00:10:58.740 "data_offset": 0, 00:10:58.740 "data_size": 0 00:10:58.740 }, 00:10:58.740 { 00:10:58.740 "name": "BaseBdev3", 00:10:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.740 "is_configured": false, 00:10:58.740 "data_offset": 0, 00:10:58.740 "data_size": 0 00:10:58.740 } 00:10:58.740 ] 00:10:58.740 }' 00:10:58.740 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.740 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.000 [2024-11-05 16:25:11.982766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.000 [2024-11-05 16:25:11.982825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:59.000 
16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.000 [2024-11-05 16:25:11.990815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.000 [2024-11-05 16:25:11.992900] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.000 [2024-11-05 16:25:11.992978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.000 [2024-11-05 16:25:11.993018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:59.000 [2024-11-05 16:25:11.993047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.000 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.001 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.001 16:25:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.001 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.001 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.001 16:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.001 "name": "Existed_Raid", 00:10:59.001 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:10:59.001 "strip_size_kb": 64, 00:10:59.001 "state": "configuring", 00:10:59.001 "raid_level": "concat", 00:10:59.001 "superblock": true, 00:10:59.001 "num_base_bdevs": 3, 00:10:59.001 "num_base_bdevs_discovered": 1, 00:10:59.001 "num_base_bdevs_operational": 3, 00:10:59.001 "base_bdevs_list": [ 00:10:59.001 { 00:10:59.001 "name": "BaseBdev1", 00:10:59.001 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:10:59.001 "is_configured": true, 00:10:59.001 "data_offset": 2048, 00:10:59.001 "data_size": 63488 00:10:59.001 }, 00:10:59.001 { 00:10:59.001 "name": "BaseBdev2", 00:10:59.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.001 "is_configured": false, 00:10:59.001 "data_offset": 0, 00:10:59.001 "data_size": 0 00:10:59.001 }, 00:10:59.001 { 00:10:59.001 "name": "BaseBdev3", 00:10:59.001 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:59.001 "is_configured": false, 00:10:59.001 "data_offset": 0, 00:10:59.001 "data_size": 0 00:10:59.001 } 00:10:59.001 ] 00:10:59.001 }' 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.001 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.570 [2024-11-05 16:25:12.473097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.570 BaseBdev2 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.570 [ 00:10:59.570 { 00:10:59.570 "name": "BaseBdev2", 00:10:59.570 "aliases": [ 00:10:59.570 "62191b8a-28fb-4df2-b260-a58ea7846d75" 00:10:59.570 ], 00:10:59.570 "product_name": "Malloc disk", 00:10:59.570 "block_size": 512, 00:10:59.570 "num_blocks": 65536, 00:10:59.570 "uuid": "62191b8a-28fb-4df2-b260-a58ea7846d75", 00:10:59.570 "assigned_rate_limits": { 00:10:59.570 "rw_ios_per_sec": 0, 00:10:59.570 "rw_mbytes_per_sec": 0, 00:10:59.570 "r_mbytes_per_sec": 0, 00:10:59.570 "w_mbytes_per_sec": 0 00:10:59.570 }, 00:10:59.570 "claimed": true, 00:10:59.570 "claim_type": "exclusive_write", 00:10:59.570 "zoned": false, 00:10:59.570 "supported_io_types": { 00:10:59.570 "read": true, 00:10:59.570 "write": true, 00:10:59.570 "unmap": true, 00:10:59.570 "flush": true, 00:10:59.570 "reset": true, 00:10:59.570 "nvme_admin": false, 00:10:59.570 "nvme_io": false, 00:10:59.570 "nvme_io_md": false, 00:10:59.570 "write_zeroes": true, 00:10:59.570 "zcopy": true, 00:10:59.570 "get_zone_info": false, 00:10:59.570 "zone_management": false, 00:10:59.570 "zone_append": false, 00:10:59.570 "compare": false, 00:10:59.570 "compare_and_write": false, 00:10:59.570 "abort": true, 00:10:59.570 "seek_hole": false, 00:10:59.570 "seek_data": false, 00:10:59.570 "copy": true, 00:10:59.570 "nvme_iov_md": false 00:10:59.570 }, 00:10:59.570 "memory_domains": [ 00:10:59.570 { 00:10:59.570 "dma_device_id": "system", 00:10:59.570 "dma_device_type": 1 00:10:59.570 }, 00:10:59.570 { 00:10:59.570 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.570 "dma_device_type": 2 00:10:59.570 } 00:10:59.570 ], 00:10:59.570 "driver_specific": {} 00:10:59.570 } 00:10:59.570 ] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.570 "name": "Existed_Raid", 00:10:59.570 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:10:59.570 "strip_size_kb": 64, 00:10:59.570 "state": "configuring", 00:10:59.570 "raid_level": "concat", 00:10:59.570 "superblock": true, 00:10:59.570 "num_base_bdevs": 3, 00:10:59.570 "num_base_bdevs_discovered": 2, 00:10:59.570 "num_base_bdevs_operational": 3, 00:10:59.570 "base_bdevs_list": [ 00:10:59.570 { 00:10:59.570 "name": "BaseBdev1", 00:10:59.570 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:10:59.570 "is_configured": true, 00:10:59.570 "data_offset": 2048, 00:10:59.570 "data_size": 63488 00:10:59.570 }, 00:10:59.570 { 00:10:59.570 "name": "BaseBdev2", 00:10:59.570 "uuid": "62191b8a-28fb-4df2-b260-a58ea7846d75", 00:10:59.570 "is_configured": true, 00:10:59.570 "data_offset": 2048, 00:10:59.570 "data_size": 63488 00:10:59.570 }, 00:10:59.570 { 00:10:59.570 "name": "BaseBdev3", 00:10:59.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.570 "is_configured": false, 00:10:59.570 "data_offset": 0, 00:10:59.570 "data_size": 0 00:10:59.570 } 00:10:59.570 ] 00:10:59.570 }' 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.570 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:00.140 16:25:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.140 [2024-11-05 16:25:12.991552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.140 [2024-11-05 16:25:12.991930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.140 [2024-11-05 16:25:12.991997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.140 [2024-11-05 16:25:12.992298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:00.140 [2024-11-05 16:25:12.992550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.140 [2024-11-05 16:25:12.992601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:00.140 BaseBdev3 00:11:00.140 [2024-11-05 16:25:12.992817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.140 16:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.140 [ 00:11:00.140 { 00:11:00.140 "name": "BaseBdev3", 00:11:00.140 "aliases": [ 00:11:00.140 "1c00e3a3-62c4-417f-9a5e-976e849bdc0e" 00:11:00.140 ], 00:11:00.140 "product_name": "Malloc disk", 00:11:00.140 "block_size": 512, 00:11:00.140 "num_blocks": 65536, 00:11:00.140 "uuid": "1c00e3a3-62c4-417f-9a5e-976e849bdc0e", 00:11:00.140 "assigned_rate_limits": { 00:11:00.140 "rw_ios_per_sec": 0, 00:11:00.140 "rw_mbytes_per_sec": 0, 00:11:00.140 "r_mbytes_per_sec": 0, 00:11:00.140 "w_mbytes_per_sec": 0 00:11:00.140 }, 00:11:00.140 "claimed": true, 00:11:00.140 "claim_type": "exclusive_write", 00:11:00.140 "zoned": false, 00:11:00.140 "supported_io_types": { 00:11:00.140 "read": true, 00:11:00.140 "write": true, 00:11:00.140 "unmap": true, 00:11:00.140 "flush": true, 00:11:00.140 "reset": true, 00:11:00.140 "nvme_admin": false, 00:11:00.140 "nvme_io": false, 00:11:00.140 "nvme_io_md": false, 00:11:00.140 "write_zeroes": true, 00:11:00.140 "zcopy": true, 00:11:00.140 "get_zone_info": false, 00:11:00.140 "zone_management": false, 00:11:00.140 "zone_append": false, 00:11:00.140 "compare": false, 00:11:00.140 "compare_and_write": false, 00:11:00.140 "abort": true, 00:11:00.140 "seek_hole": false, 00:11:00.140 "seek_data": false, 
00:11:00.140 "copy": true, 00:11:00.140 "nvme_iov_md": false 00:11:00.140 }, 00:11:00.140 "memory_domains": [ 00:11:00.140 { 00:11:00.140 "dma_device_id": "system", 00:11:00.140 "dma_device_type": 1 00:11:00.140 }, 00:11:00.140 { 00:11:00.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.140 "dma_device_type": 2 00:11:00.140 } 00:11:00.140 ], 00:11:00.140 "driver_specific": {} 00:11:00.140 } 00:11:00.140 ] 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.140 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.141 "name": "Existed_Raid", 00:11:00.141 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:11:00.141 "strip_size_kb": 64, 00:11:00.141 "state": "online", 00:11:00.141 "raid_level": "concat", 00:11:00.141 "superblock": true, 00:11:00.141 "num_base_bdevs": 3, 00:11:00.141 "num_base_bdevs_discovered": 3, 00:11:00.141 "num_base_bdevs_operational": 3, 00:11:00.141 "base_bdevs_list": [ 00:11:00.141 { 00:11:00.141 "name": "BaseBdev1", 00:11:00.141 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:11:00.141 "is_configured": true, 00:11:00.141 "data_offset": 2048, 00:11:00.141 "data_size": 63488 00:11:00.141 }, 00:11:00.141 { 00:11:00.141 "name": "BaseBdev2", 00:11:00.141 "uuid": "62191b8a-28fb-4df2-b260-a58ea7846d75", 00:11:00.141 "is_configured": true, 00:11:00.141 "data_offset": 2048, 00:11:00.141 "data_size": 63488 00:11:00.141 }, 00:11:00.141 { 00:11:00.141 "name": "BaseBdev3", 00:11:00.141 "uuid": "1c00e3a3-62c4-417f-9a5e-976e849bdc0e", 00:11:00.141 "is_configured": true, 00:11:00.141 "data_offset": 2048, 00:11:00.141 "data_size": 63488 00:11:00.141 } 00:11:00.141 ] 00:11:00.141 }' 00:11:00.141 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.141 16:25:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.710 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.710 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.710 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.710 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.711 [2024-11-05 16:25:13.558944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.711 "name": "Existed_Raid", 00:11:00.711 "aliases": [ 00:11:00.711 "0df1cb1c-6b08-4828-9ed8-a45d9077d493" 00:11:00.711 ], 00:11:00.711 "product_name": "Raid Volume", 00:11:00.711 "block_size": 512, 00:11:00.711 "num_blocks": 190464, 00:11:00.711 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:11:00.711 "assigned_rate_limits": { 00:11:00.711 "rw_ios_per_sec": 0, 00:11:00.711 "rw_mbytes_per_sec": 0, 00:11:00.711 
"r_mbytes_per_sec": 0, 00:11:00.711 "w_mbytes_per_sec": 0 00:11:00.711 }, 00:11:00.711 "claimed": false, 00:11:00.711 "zoned": false, 00:11:00.711 "supported_io_types": { 00:11:00.711 "read": true, 00:11:00.711 "write": true, 00:11:00.711 "unmap": true, 00:11:00.711 "flush": true, 00:11:00.711 "reset": true, 00:11:00.711 "nvme_admin": false, 00:11:00.711 "nvme_io": false, 00:11:00.711 "nvme_io_md": false, 00:11:00.711 "write_zeroes": true, 00:11:00.711 "zcopy": false, 00:11:00.711 "get_zone_info": false, 00:11:00.711 "zone_management": false, 00:11:00.711 "zone_append": false, 00:11:00.711 "compare": false, 00:11:00.711 "compare_and_write": false, 00:11:00.711 "abort": false, 00:11:00.711 "seek_hole": false, 00:11:00.711 "seek_data": false, 00:11:00.711 "copy": false, 00:11:00.711 "nvme_iov_md": false 00:11:00.711 }, 00:11:00.711 "memory_domains": [ 00:11:00.711 { 00:11:00.711 "dma_device_id": "system", 00:11:00.711 "dma_device_type": 1 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.711 "dma_device_type": 2 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "dma_device_id": "system", 00:11:00.711 "dma_device_type": 1 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.711 "dma_device_type": 2 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "dma_device_id": "system", 00:11:00.711 "dma_device_type": 1 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.711 "dma_device_type": 2 00:11:00.711 } 00:11:00.711 ], 00:11:00.711 "driver_specific": { 00:11:00.711 "raid": { 00:11:00.711 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:11:00.711 "strip_size_kb": 64, 00:11:00.711 "state": "online", 00:11:00.711 "raid_level": "concat", 00:11:00.711 "superblock": true, 00:11:00.711 "num_base_bdevs": 3, 00:11:00.711 "num_base_bdevs_discovered": 3, 00:11:00.711 "num_base_bdevs_operational": 3, 00:11:00.711 "base_bdevs_list": [ 00:11:00.711 { 00:11:00.711 
"name": "BaseBdev1", 00:11:00.711 "uuid": "b57cc376-b65a-43d4-9bd7-3caa260660e8", 00:11:00.711 "is_configured": true, 00:11:00.711 "data_offset": 2048, 00:11:00.711 "data_size": 63488 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "name": "BaseBdev2", 00:11:00.711 "uuid": "62191b8a-28fb-4df2-b260-a58ea7846d75", 00:11:00.711 "is_configured": true, 00:11:00.711 "data_offset": 2048, 00:11:00.711 "data_size": 63488 00:11:00.711 }, 00:11:00.711 { 00:11:00.711 "name": "BaseBdev3", 00:11:00.711 "uuid": "1c00e3a3-62c4-417f-9a5e-976e849bdc0e", 00:11:00.711 "is_configured": true, 00:11:00.711 "data_offset": 2048, 00:11:00.711 "data_size": 63488 00:11:00.711 } 00:11:00.711 ] 00:11:00.711 } 00:11:00.711 } 00:11:00.711 }' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:00.711 BaseBdev2 00:11:00.711 BaseBdev3' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.711 16:25:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.711 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.712 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.712 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.971 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.972 [2024-11-05 16:25:13.858193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.972 [2024-11-05 16:25:13.858228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.972 [2024-11-05 16:25:13.858286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.972 16:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.972 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.972 "name": "Existed_Raid", 00:11:00.972 "uuid": "0df1cb1c-6b08-4828-9ed8-a45d9077d493", 00:11:00.972 "strip_size_kb": 64, 00:11:00.972 "state": "offline", 00:11:00.972 "raid_level": "concat", 00:11:00.972 "superblock": true, 00:11:00.972 "num_base_bdevs": 3, 00:11:00.972 "num_base_bdevs_discovered": 2, 00:11:00.972 "num_base_bdevs_operational": 2, 00:11:00.972 "base_bdevs_list": [ 00:11:00.972 { 00:11:00.972 "name": null, 00:11:00.972 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:00.972 "is_configured": false, 00:11:00.972 "data_offset": 0, 00:11:00.972 "data_size": 63488 00:11:00.972 }, 00:11:00.972 { 00:11:00.972 "name": "BaseBdev2", 00:11:00.972 "uuid": "62191b8a-28fb-4df2-b260-a58ea7846d75", 00:11:00.972 "is_configured": true, 00:11:00.972 "data_offset": 2048, 00:11:00.972 "data_size": 63488 00:11:00.972 }, 00:11:00.972 { 00:11:00.972 "name": "BaseBdev3", 00:11:00.972 "uuid": "1c00e3a3-62c4-417f-9a5e-976e849bdc0e", 00:11:00.972 "is_configured": true, 00:11:00.972 "data_offset": 2048, 00:11:00.972 "data_size": 63488 00:11:00.972 } 00:11:00.972 ] 00:11:00.972 }' 00:11:00.972 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.972 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 [2024-11-05 16:25:14.459710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 [2024-11-05 16:25:14.619766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.538 [2024-11-05 16:25:14.619819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:01.797 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.797 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.797 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.797 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.797 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 BaseBdev2 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 
16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 [ 00:11:01.798 { 00:11:01.798 "name": "BaseBdev2", 00:11:01.798 "aliases": [ 00:11:01.798 "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d" 00:11:01.798 ], 00:11:01.798 "product_name": "Malloc disk", 00:11:01.798 "block_size": 512, 00:11:01.798 "num_blocks": 65536, 00:11:01.798 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:01.798 "assigned_rate_limits": { 00:11:01.798 "rw_ios_per_sec": 0, 00:11:01.798 "rw_mbytes_per_sec": 0, 00:11:01.798 "r_mbytes_per_sec": 0, 00:11:01.798 "w_mbytes_per_sec": 0 
00:11:01.798 }, 00:11:01.798 "claimed": false, 00:11:01.798 "zoned": false, 00:11:01.798 "supported_io_types": { 00:11:01.798 "read": true, 00:11:01.798 "write": true, 00:11:01.798 "unmap": true, 00:11:01.798 "flush": true, 00:11:01.798 "reset": true, 00:11:01.798 "nvme_admin": false, 00:11:01.798 "nvme_io": false, 00:11:01.798 "nvme_io_md": false, 00:11:01.798 "write_zeroes": true, 00:11:01.798 "zcopy": true, 00:11:01.798 "get_zone_info": false, 00:11:01.798 "zone_management": false, 00:11:01.798 "zone_append": false, 00:11:01.798 "compare": false, 00:11:01.798 "compare_and_write": false, 00:11:01.798 "abort": true, 00:11:01.798 "seek_hole": false, 00:11:01.798 "seek_data": false, 00:11:01.798 "copy": true, 00:11:01.798 "nvme_iov_md": false 00:11:01.798 }, 00:11:01.798 "memory_domains": [ 00:11:01.798 { 00:11:01.798 "dma_device_id": "system", 00:11:01.798 "dma_device_type": 1 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.798 "dma_device_type": 2 00:11:01.798 } 00:11:01.798 ], 00:11:01.798 "driver_specific": {} 00:11:01.798 } 00:11:01.798 ] 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.798 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.056 BaseBdev3 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.056 [ 00:11:02.056 { 00:11:02.056 "name": "BaseBdev3", 00:11:02.056 "aliases": [ 00:11:02.056 "3dbc1887-a2ea-4d16-841d-a896df030b23" 00:11:02.056 ], 00:11:02.056 "product_name": "Malloc disk", 00:11:02.056 "block_size": 512, 00:11:02.056 "num_blocks": 65536, 00:11:02.056 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:02.056 "assigned_rate_limits": { 00:11:02.056 "rw_ios_per_sec": 0, 00:11:02.056 "rw_mbytes_per_sec": 0, 
00:11:02.056 "r_mbytes_per_sec": 0, 00:11:02.056 "w_mbytes_per_sec": 0 00:11:02.056 }, 00:11:02.056 "claimed": false, 00:11:02.056 "zoned": false, 00:11:02.056 "supported_io_types": { 00:11:02.056 "read": true, 00:11:02.056 "write": true, 00:11:02.056 "unmap": true, 00:11:02.056 "flush": true, 00:11:02.056 "reset": true, 00:11:02.056 "nvme_admin": false, 00:11:02.056 "nvme_io": false, 00:11:02.056 "nvme_io_md": false, 00:11:02.056 "write_zeroes": true, 00:11:02.056 "zcopy": true, 00:11:02.056 "get_zone_info": false, 00:11:02.056 "zone_management": false, 00:11:02.056 "zone_append": false, 00:11:02.056 "compare": false, 00:11:02.056 "compare_and_write": false, 00:11:02.056 "abort": true, 00:11:02.056 "seek_hole": false, 00:11:02.056 "seek_data": false, 00:11:02.056 "copy": true, 00:11:02.056 "nvme_iov_md": false 00:11:02.056 }, 00:11:02.056 "memory_domains": [ 00:11:02.056 { 00:11:02.056 "dma_device_id": "system", 00:11:02.056 "dma_device_type": 1 00:11:02.056 }, 00:11:02.056 { 00:11:02.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.056 "dma_device_type": 2 00:11:02.056 } 00:11:02.056 ], 00:11:02.056 "driver_specific": {} 00:11:02.056 } 00:11:02.056 ] 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.056 16:25:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.056 [2024-11-05 16:25:14.951879] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.056 [2024-11-05 16:25:14.951995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.057 [2024-11-05 16:25:14.952058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.057 [2024-11-05 16:25:14.954206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.057 16:25:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.057 16:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.057 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.057 "name": "Existed_Raid", 00:11:02.057 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:02.057 "strip_size_kb": 64, 00:11:02.057 "state": "configuring", 00:11:02.057 "raid_level": "concat", 00:11:02.057 "superblock": true, 00:11:02.057 "num_base_bdevs": 3, 00:11:02.057 "num_base_bdevs_discovered": 2, 00:11:02.057 "num_base_bdevs_operational": 3, 00:11:02.057 "base_bdevs_list": [ 00:11:02.057 { 00:11:02.057 "name": "BaseBdev1", 00:11:02.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.057 "is_configured": false, 00:11:02.057 "data_offset": 0, 00:11:02.057 "data_size": 0 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "name": "BaseBdev2", 00:11:02.057 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:02.057 "is_configured": true, 00:11:02.057 "data_offset": 2048, 00:11:02.057 "data_size": 63488 00:11:02.057 }, 00:11:02.057 { 00:11:02.057 "name": "BaseBdev3", 00:11:02.057 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:02.057 "is_configured": true, 00:11:02.057 "data_offset": 2048, 00:11:02.057 "data_size": 63488 00:11:02.057 } 00:11:02.057 ] 00:11:02.057 }' 00:11:02.057 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.057 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.623 [2024-11-05 16:25:15.419058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.623 "name": "Existed_Raid", 00:11:02.623 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:02.623 "strip_size_kb": 64, 00:11:02.623 "state": "configuring", 00:11:02.623 "raid_level": "concat", 00:11:02.623 "superblock": true, 00:11:02.623 "num_base_bdevs": 3, 00:11:02.623 "num_base_bdevs_discovered": 1, 00:11:02.623 "num_base_bdevs_operational": 3, 00:11:02.623 "base_bdevs_list": [ 00:11:02.623 { 00:11:02.623 "name": "BaseBdev1", 00:11:02.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.623 "is_configured": false, 00:11:02.623 "data_offset": 0, 00:11:02.623 "data_size": 0 00:11:02.623 }, 00:11:02.623 { 00:11:02.623 "name": null, 00:11:02.623 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:02.623 "is_configured": false, 00:11:02.623 "data_offset": 0, 00:11:02.623 "data_size": 63488 00:11:02.623 }, 00:11:02.623 { 00:11:02.623 "name": "BaseBdev3", 00:11:02.623 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:02.623 "is_configured": true, 00:11:02.623 "data_offset": 2048, 00:11:02.623 "data_size": 63488 00:11:02.623 } 00:11:02.623 ] 00:11:02.623 }' 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.623 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.882 [2024-11-05 16:25:15.949916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.882 BaseBdev1 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.882 16:25:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.882 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.141 [ 00:11:03.141 { 00:11:03.141 "name": "BaseBdev1", 00:11:03.141 "aliases": [ 00:11:03.141 "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5" 00:11:03.141 ], 00:11:03.141 "product_name": "Malloc disk", 00:11:03.141 "block_size": 512, 00:11:03.141 "num_blocks": 65536, 00:11:03.141 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:03.141 "assigned_rate_limits": { 00:11:03.141 "rw_ios_per_sec": 0, 00:11:03.141 "rw_mbytes_per_sec": 0, 00:11:03.141 "r_mbytes_per_sec": 0, 00:11:03.141 "w_mbytes_per_sec": 0 00:11:03.141 }, 00:11:03.141 "claimed": true, 00:11:03.141 "claim_type": "exclusive_write", 00:11:03.141 "zoned": false, 00:11:03.141 "supported_io_types": { 00:11:03.141 "read": true, 00:11:03.141 "write": true, 00:11:03.141 "unmap": true, 00:11:03.141 "flush": true, 00:11:03.141 "reset": true, 00:11:03.141 "nvme_admin": false, 00:11:03.141 "nvme_io": false, 00:11:03.141 "nvme_io_md": false, 00:11:03.141 "write_zeroes": true, 00:11:03.141 "zcopy": true, 00:11:03.141 "get_zone_info": false, 00:11:03.141 "zone_management": false, 00:11:03.141 "zone_append": false, 00:11:03.141 "compare": false, 00:11:03.141 "compare_and_write": false, 00:11:03.141 "abort": true, 00:11:03.141 "seek_hole": false, 00:11:03.141 "seek_data": false, 00:11:03.141 "copy": true, 00:11:03.141 "nvme_iov_md": false 00:11:03.141 }, 00:11:03.141 "memory_domains": [ 00:11:03.141 { 00:11:03.141 "dma_device_id": "system", 00:11:03.141 "dma_device_type": 1 00:11:03.141 }, 00:11:03.141 { 00:11:03.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.141 
"dma_device_type": 2 00:11:03.141 } 00:11:03.141 ], 00:11:03.141 "driver_specific": {} 00:11:03.141 } 00:11:03.141 ] 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.141 16:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:03.141 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.141 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.141 "name": "Existed_Raid", 00:11:03.141 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:03.141 "strip_size_kb": 64, 00:11:03.141 "state": "configuring", 00:11:03.141 "raid_level": "concat", 00:11:03.141 "superblock": true, 00:11:03.141 "num_base_bdevs": 3, 00:11:03.141 "num_base_bdevs_discovered": 2, 00:11:03.141 "num_base_bdevs_operational": 3, 00:11:03.141 "base_bdevs_list": [ 00:11:03.141 { 00:11:03.141 "name": "BaseBdev1", 00:11:03.141 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:03.141 "is_configured": true, 00:11:03.141 "data_offset": 2048, 00:11:03.141 "data_size": 63488 00:11:03.141 }, 00:11:03.141 { 00:11:03.141 "name": null, 00:11:03.141 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:03.141 "is_configured": false, 00:11:03.141 "data_offset": 0, 00:11:03.141 "data_size": 63488 00:11:03.141 }, 00:11:03.141 { 00:11:03.141 "name": "BaseBdev3", 00:11:03.141 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:03.141 "is_configured": true, 00:11:03.142 "data_offset": 2048, 00:11:03.142 "data_size": 63488 00:11:03.142 } 00:11:03.142 ] 00:11:03.142 }' 00:11:03.142 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.142 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.401 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.401 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.401 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.401 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:03.401 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.660 [2024-11-05 16:25:16.501077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.660 
16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.660 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.660 "name": "Existed_Raid", 00:11:03.660 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:03.660 "strip_size_kb": 64, 00:11:03.660 "state": "configuring", 00:11:03.660 "raid_level": "concat", 00:11:03.660 "superblock": true, 00:11:03.660 "num_base_bdevs": 3, 00:11:03.660 "num_base_bdevs_discovered": 1, 00:11:03.660 "num_base_bdevs_operational": 3, 00:11:03.660 "base_bdevs_list": [ 00:11:03.660 { 00:11:03.660 "name": "BaseBdev1", 00:11:03.660 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:03.660 "is_configured": true, 00:11:03.660 "data_offset": 2048, 00:11:03.660 "data_size": 63488 00:11:03.660 }, 00:11:03.660 { 00:11:03.660 "name": null, 00:11:03.660 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:03.660 "is_configured": false, 00:11:03.660 "data_offset": 0, 00:11:03.660 "data_size": 63488 00:11:03.660 }, 00:11:03.660 { 00:11:03.660 "name": null, 00:11:03.660 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:03.660 "is_configured": false, 00:11:03.660 "data_offset": 0, 00:11:03.660 "data_size": 63488 00:11:03.660 } 00:11:03.661 ] 00:11:03.661 }' 00:11:03.661 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.661 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.920 
16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.920 [2024-11-05 16:25:16.988350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.920 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.920 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.920 16:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.180 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.180 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.180 "name": "Existed_Raid", 00:11:04.180 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:04.180 "strip_size_kb": 64, 00:11:04.180 "state": "configuring", 00:11:04.180 "raid_level": "concat", 00:11:04.180 "superblock": true, 00:11:04.180 "num_base_bdevs": 3, 00:11:04.180 "num_base_bdevs_discovered": 2, 00:11:04.180 "num_base_bdevs_operational": 3, 00:11:04.180 "base_bdevs_list": [ 00:11:04.180 { 00:11:04.180 "name": "BaseBdev1", 00:11:04.180 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:04.180 "is_configured": true, 00:11:04.180 "data_offset": 2048, 00:11:04.180 "data_size": 63488 00:11:04.180 }, 00:11:04.180 { 00:11:04.180 "name": null, 00:11:04.180 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:04.180 "is_configured": false, 00:11:04.180 "data_offset": 0, 00:11:04.180 "data_size": 
63488 00:11:04.180 }, 00:11:04.180 { 00:11:04.180 "name": "BaseBdev3", 00:11:04.180 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:04.180 "is_configured": true, 00:11:04.180 "data_offset": 2048, 00:11:04.180 "data_size": 63488 00:11:04.180 } 00:11:04.180 ] 00:11:04.180 }' 00:11:04.180 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.180 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.440 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.440 [2024-11-05 16:25:17.463550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.699 "name": "Existed_Raid", 00:11:04.699 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:04.699 "strip_size_kb": 64, 00:11:04.699 "state": "configuring", 00:11:04.699 "raid_level": "concat", 00:11:04.699 "superblock": true, 00:11:04.699 "num_base_bdevs": 3, 00:11:04.699 "num_base_bdevs_discovered": 1, 00:11:04.699 "num_base_bdevs_operational": 
3, 00:11:04.699 "base_bdevs_list": [ 00:11:04.699 { 00:11:04.699 "name": null, 00:11:04.699 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:04.699 "is_configured": false, 00:11:04.699 "data_offset": 0, 00:11:04.699 "data_size": 63488 00:11:04.699 }, 00:11:04.699 { 00:11:04.699 "name": null, 00:11:04.699 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:04.699 "is_configured": false, 00:11:04.699 "data_offset": 0, 00:11:04.699 "data_size": 63488 00:11:04.699 }, 00:11:04.699 { 00:11:04.699 "name": "BaseBdev3", 00:11:04.699 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:04.699 "is_configured": true, 00:11:04.699 "data_offset": 2048, 00:11:04.699 "data_size": 63488 00:11:04.699 } 00:11:04.699 ] 00:11:04.699 }' 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.699 16:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:05.268 [2024-11-05 16:25:18.105083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.268 "name": "Existed_Raid", 00:11:05.268 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:05.268 "strip_size_kb": 64, 00:11:05.268 "state": "configuring", 00:11:05.268 "raid_level": "concat", 00:11:05.268 "superblock": true, 00:11:05.268 "num_base_bdevs": 3, 00:11:05.268 "num_base_bdevs_discovered": 2, 00:11:05.268 "num_base_bdevs_operational": 3, 00:11:05.268 "base_bdevs_list": [ 00:11:05.268 { 00:11:05.268 "name": null, 00:11:05.268 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:05.268 "is_configured": false, 00:11:05.268 "data_offset": 0, 00:11:05.268 "data_size": 63488 00:11:05.268 }, 00:11:05.268 { 00:11:05.268 "name": "BaseBdev2", 00:11:05.268 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:05.268 "is_configured": true, 00:11:05.268 "data_offset": 2048, 00:11:05.268 "data_size": 63488 00:11:05.268 }, 00:11:05.268 { 00:11:05.268 "name": "BaseBdev3", 00:11:05.268 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:05.268 "is_configured": true, 00:11:05.268 "data_offset": 2048, 00:11:05.268 "data_size": 63488 00:11:05.268 } 00:11:05.268 ] 00:11:05.268 }' 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.268 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 NewBaseBdev 00:11:05.527 [2024-11-05 16:25:18.598672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.527 [2024-11-05 16:25:18.598909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.527 [2024-11-05 16:25:18.598926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.527 [2024-11-05 16:25:18.599190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:05.527 [2024-11-05 16:25:18.599339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.527 [2024-11-05 16:25:18.599354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.527 [2024-11-05 16:25:18.599510] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.527 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.787 [ 00:11:05.787 { 00:11:05.787 "name": "NewBaseBdev", 00:11:05.787 "aliases": [ 00:11:05.787 "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5" 00:11:05.787 ], 00:11:05.787 "product_name": "Malloc disk", 00:11:05.787 "block_size": 512, 00:11:05.787 "num_blocks": 65536, 00:11:05.787 "uuid": 
"a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:05.787 "assigned_rate_limits": { 00:11:05.787 "rw_ios_per_sec": 0, 00:11:05.787 "rw_mbytes_per_sec": 0, 00:11:05.787 "r_mbytes_per_sec": 0, 00:11:05.787 "w_mbytes_per_sec": 0 00:11:05.787 }, 00:11:05.787 "claimed": true, 00:11:05.787 "claim_type": "exclusive_write", 00:11:05.787 "zoned": false, 00:11:05.787 "supported_io_types": { 00:11:05.787 "read": true, 00:11:05.787 "write": true, 00:11:05.787 "unmap": true, 00:11:05.787 "flush": true, 00:11:05.787 "reset": true, 00:11:05.787 "nvme_admin": false, 00:11:05.787 "nvme_io": false, 00:11:05.787 "nvme_io_md": false, 00:11:05.787 "write_zeroes": true, 00:11:05.787 "zcopy": true, 00:11:05.787 "get_zone_info": false, 00:11:05.787 "zone_management": false, 00:11:05.787 "zone_append": false, 00:11:05.787 "compare": false, 00:11:05.787 "compare_and_write": false, 00:11:05.787 "abort": true, 00:11:05.787 "seek_hole": false, 00:11:05.787 "seek_data": false, 00:11:05.787 "copy": true, 00:11:05.787 "nvme_iov_md": false 00:11:05.787 }, 00:11:05.787 "memory_domains": [ 00:11:05.787 { 00:11:05.787 "dma_device_id": "system", 00:11:05.787 "dma_device_type": 1 00:11:05.787 }, 00:11:05.787 { 00:11:05.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.787 "dma_device_type": 2 00:11:05.787 } 00:11:05.787 ], 00:11:05.787 "driver_specific": {} 00:11:05.787 } 00:11:05.787 ] 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.787 16:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.787 "name": "Existed_Raid", 00:11:05.787 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:05.787 "strip_size_kb": 64, 00:11:05.787 "state": "online", 00:11:05.787 "raid_level": "concat", 00:11:05.787 "superblock": true, 00:11:05.787 "num_base_bdevs": 3, 00:11:05.787 "num_base_bdevs_discovered": 3, 00:11:05.787 "num_base_bdevs_operational": 3, 00:11:05.787 "base_bdevs_list": [ 00:11:05.787 { 00:11:05.787 "name": "NewBaseBdev", 00:11:05.787 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:05.787 "is_configured": 
true, 00:11:05.787 "data_offset": 2048, 00:11:05.787 "data_size": 63488 00:11:05.787 }, 00:11:05.787 { 00:11:05.787 "name": "BaseBdev2", 00:11:05.787 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:05.787 "is_configured": true, 00:11:05.787 "data_offset": 2048, 00:11:05.787 "data_size": 63488 00:11:05.787 }, 00:11:05.787 { 00:11:05.787 "name": "BaseBdev3", 00:11:05.787 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:05.787 "is_configured": true, 00:11:05.787 "data_offset": 2048, 00:11:05.787 "data_size": 63488 00:11:05.787 } 00:11:05.787 ] 00:11:05.787 }' 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.787 16:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.047 [2024-11-05 16:25:19.110374] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.047 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.310 "name": "Existed_Raid", 00:11:06.310 "aliases": [ 00:11:06.310 "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae" 00:11:06.310 ], 00:11:06.310 "product_name": "Raid Volume", 00:11:06.310 "block_size": 512, 00:11:06.310 "num_blocks": 190464, 00:11:06.310 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:06.310 "assigned_rate_limits": { 00:11:06.310 "rw_ios_per_sec": 0, 00:11:06.310 "rw_mbytes_per_sec": 0, 00:11:06.310 "r_mbytes_per_sec": 0, 00:11:06.310 "w_mbytes_per_sec": 0 00:11:06.310 }, 00:11:06.310 "claimed": false, 00:11:06.310 "zoned": false, 00:11:06.310 "supported_io_types": { 00:11:06.310 "read": true, 00:11:06.310 "write": true, 00:11:06.310 "unmap": true, 00:11:06.310 "flush": true, 00:11:06.310 "reset": true, 00:11:06.310 "nvme_admin": false, 00:11:06.310 "nvme_io": false, 00:11:06.310 "nvme_io_md": false, 00:11:06.310 "write_zeroes": true, 00:11:06.310 "zcopy": false, 00:11:06.310 "get_zone_info": false, 00:11:06.310 "zone_management": false, 00:11:06.310 "zone_append": false, 00:11:06.310 "compare": false, 00:11:06.310 "compare_and_write": false, 00:11:06.310 "abort": false, 00:11:06.310 "seek_hole": false, 00:11:06.310 "seek_data": false, 00:11:06.310 "copy": false, 00:11:06.310 "nvme_iov_md": false 00:11:06.310 }, 00:11:06.310 "memory_domains": [ 00:11:06.310 { 00:11:06.310 "dma_device_id": "system", 00:11:06.310 "dma_device_type": 1 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.310 "dma_device_type": 2 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "dma_device_id": "system", 00:11:06.310 "dma_device_type": 1 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.310 
"dma_device_type": 2 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "dma_device_id": "system", 00:11:06.310 "dma_device_type": 1 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.310 "dma_device_type": 2 00:11:06.310 } 00:11:06.310 ], 00:11:06.310 "driver_specific": { 00:11:06.310 "raid": { 00:11:06.310 "uuid": "5e58f8e0-e9a1-4695-888e-2e6740b6f1ae", 00:11:06.310 "strip_size_kb": 64, 00:11:06.310 "state": "online", 00:11:06.310 "raid_level": "concat", 00:11:06.310 "superblock": true, 00:11:06.310 "num_base_bdevs": 3, 00:11:06.310 "num_base_bdevs_discovered": 3, 00:11:06.310 "num_base_bdevs_operational": 3, 00:11:06.310 "base_bdevs_list": [ 00:11:06.310 { 00:11:06.310 "name": "NewBaseBdev", 00:11:06.310 "uuid": "a7048dce-1af0-4bd4-b0f9-1365f7f9b0f5", 00:11:06.310 "is_configured": true, 00:11:06.310 "data_offset": 2048, 00:11:06.310 "data_size": 63488 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "name": "BaseBdev2", 00:11:06.310 "uuid": "2ea7a06a-20e4-4ed1-8d79-ee26f1ff240d", 00:11:06.310 "is_configured": true, 00:11:06.310 "data_offset": 2048, 00:11:06.310 "data_size": 63488 00:11:06.310 }, 00:11:06.310 { 00:11:06.310 "name": "BaseBdev3", 00:11:06.310 "uuid": "3dbc1887-a2ea-4d16-841d-a896df030b23", 00:11:06.310 "is_configured": true, 00:11:06.310 "data_offset": 2048, 00:11:06.310 "data_size": 63488 00:11:06.310 } 00:11:06.310 ] 00:11:06.310 } 00:11:06.310 } 00:11:06.310 }' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:06.310 BaseBdev2 00:11:06.310 BaseBdev3' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.310 
16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.310 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 [2024-11-05 16:25:19.349766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.310 [2024-11-05 16:25:19.349924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.310 [2024-11-05 16:25:19.350074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.310 [2024-11-05 16:25:19.350158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.311 [2024-11-05 16:25:19.350177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:06.311 16:25:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66486 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66486 ']' 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66486 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66486 00:11:06.311 killing process with pid 66486 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66486' 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66486 00:11:06.311 16:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66486 00:11:06.311 [2024-11-05 16:25:19.386950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.879 [2024-11-05 16:25:19.776830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.260 16:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.260 00:11:08.260 real 0m10.949s 00:11:08.260 user 0m17.278s 00:11:08.260 sys 0m1.829s 00:11:08.260 ************************************ 00:11:08.260 END TEST raid_state_function_test_sb 00:11:08.260 ************************************ 00:11:08.260 16:25:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.260 16:25:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.260 16:25:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:08.260 16:25:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:08.260 16:25:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.260 16:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.260 ************************************ 00:11:08.260 START TEST raid_superblock_test 00:11:08.260 ************************************ 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67115 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67115 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67115 ']' 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.260 16:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.260 [2024-11-05 16:25:21.211724] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:11:08.260 [2024-11-05 16:25:21.211947] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67115 ] 00:11:08.519 [2024-11-05 16:25:21.372255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.519 [2024-11-05 16:25:21.507704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.779 [2024-11-05 16:25:21.722416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.779 [2024-11-05 16:25:21.722608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:09.038 
16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.038 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.297 malloc1 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.297 [2024-11-05 16:25:22.147566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.297 [2024-11-05 16:25:22.147683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.297 [2024-11-05 16:25:22.147745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:09.297 [2024-11-05 16:25:22.147780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.297 [2024-11-05 16:25:22.150215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.297 [2024-11-05 16:25:22.150299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.297 pt1 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.297 malloc2 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.297 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.298 [2024-11-05 16:25:22.206855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.298 [2024-11-05 16:25:22.206979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.298 [2024-11-05 16:25:22.207038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:09.298 [2024-11-05 16:25:22.207074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.298 [2024-11-05 16:25:22.209648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.298 [2024-11-05 16:25:22.209730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.298 
pt2 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.298 malloc3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.298 [2024-11-05 16:25:22.277376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.298 [2024-11-05 16:25:22.277501] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.298 [2024-11-05 16:25:22.277566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:09.298 [2024-11-05 16:25:22.277606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.298 [2024-11-05 16:25:22.280170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.298 [2024-11-05 16:25:22.280260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.298 pt3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.298 [2024-11-05 16:25:22.289425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.298 [2024-11-05 16:25:22.291544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.298 [2024-11-05 16:25:22.291663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.298 [2024-11-05 16:25:22.291869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:09.298 [2024-11-05 16:25:22.291925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:09.298 [2024-11-05 16:25:22.292245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:09.298 [2024-11-05 16:25:22.292434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:09.298 [2024-11-05 16:25:22.292446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:09.298 [2024-11-05 16:25:22.292661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.298 16:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.298 "name": "raid_bdev1", 00:11:09.298 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:09.298 "strip_size_kb": 64, 00:11:09.298 "state": "online", 00:11:09.298 "raid_level": "concat", 00:11:09.298 "superblock": true, 00:11:09.298 "num_base_bdevs": 3, 00:11:09.298 "num_base_bdevs_discovered": 3, 00:11:09.298 "num_base_bdevs_operational": 3, 00:11:09.298 "base_bdevs_list": [ 00:11:09.298 { 00:11:09.298 "name": "pt1", 00:11:09.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.298 "is_configured": true, 00:11:09.298 "data_offset": 2048, 00:11:09.298 "data_size": 63488 00:11:09.298 }, 00:11:09.298 { 00:11:09.298 "name": "pt2", 00:11:09.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.298 "is_configured": true, 00:11:09.298 "data_offset": 2048, 00:11:09.298 "data_size": 63488 00:11:09.298 }, 00:11:09.298 { 00:11:09.298 "name": "pt3", 00:11:09.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.298 "is_configured": true, 00:11:09.298 "data_offset": 2048, 00:11:09.298 "data_size": 63488 00:11:09.298 } 00:11:09.298 ] 00:11:09.298 }' 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.298 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.868 [2024-11-05 16:25:22.756987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.868 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.868 "name": "raid_bdev1", 00:11:09.868 "aliases": [ 00:11:09.868 "1e3ec108-8042-40a7-a971-b05fcb39c71d" 00:11:09.868 ], 00:11:09.868 "product_name": "Raid Volume", 00:11:09.868 "block_size": 512, 00:11:09.868 "num_blocks": 190464, 00:11:09.868 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:09.868 "assigned_rate_limits": { 00:11:09.868 "rw_ios_per_sec": 0, 00:11:09.868 "rw_mbytes_per_sec": 0, 00:11:09.868 "r_mbytes_per_sec": 0, 00:11:09.868 "w_mbytes_per_sec": 0 00:11:09.868 }, 00:11:09.868 "claimed": false, 00:11:09.868 "zoned": false, 00:11:09.868 "supported_io_types": { 00:11:09.868 "read": true, 00:11:09.868 "write": true, 00:11:09.868 "unmap": true, 00:11:09.868 "flush": true, 00:11:09.868 "reset": true, 00:11:09.868 "nvme_admin": false, 00:11:09.868 "nvme_io": false, 00:11:09.868 "nvme_io_md": false, 00:11:09.868 "write_zeroes": true, 00:11:09.868 "zcopy": false, 00:11:09.868 "get_zone_info": false, 00:11:09.868 "zone_management": false, 00:11:09.868 "zone_append": false, 00:11:09.868 "compare": 
false, 00:11:09.868 "compare_and_write": false, 00:11:09.868 "abort": false, 00:11:09.868 "seek_hole": false, 00:11:09.868 "seek_data": false, 00:11:09.868 "copy": false, 00:11:09.868 "nvme_iov_md": false 00:11:09.868 }, 00:11:09.868 "memory_domains": [ 00:11:09.868 { 00:11:09.868 "dma_device_id": "system", 00:11:09.868 "dma_device_type": 1 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.868 "dma_device_type": 2 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "dma_device_id": "system", 00:11:09.868 "dma_device_type": 1 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.868 "dma_device_type": 2 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "dma_device_id": "system", 00:11:09.868 "dma_device_type": 1 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.868 "dma_device_type": 2 00:11:09.868 } 00:11:09.868 ], 00:11:09.868 "driver_specific": { 00:11:09.868 "raid": { 00:11:09.868 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:09.868 "strip_size_kb": 64, 00:11:09.868 "state": "online", 00:11:09.868 "raid_level": "concat", 00:11:09.868 "superblock": true, 00:11:09.868 "num_base_bdevs": 3, 00:11:09.868 "num_base_bdevs_discovered": 3, 00:11:09.868 "num_base_bdevs_operational": 3, 00:11:09.868 "base_bdevs_list": [ 00:11:09.868 { 00:11:09.868 "name": "pt1", 00:11:09.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.868 "is_configured": true, 00:11:09.868 "data_offset": 2048, 00:11:09.868 "data_size": 63488 00:11:09.868 }, 00:11:09.868 { 00:11:09.868 "name": "pt2", 00:11:09.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.869 "is_configured": true, 00:11:09.869 "data_offset": 2048, 00:11:09.869 "data_size": 63488 00:11:09.869 }, 00:11:09.869 { 00:11:09.869 "name": "pt3", 00:11:09.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.869 "is_configured": true, 00:11:09.869 "data_offset": 2048, 00:11:09.869 
"data_size": 63488 00:11:09.869 } 00:11:09.869 ] 00:11:09.869 } 00:11:09.869 } 00:11:09.869 }' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.869 pt2 00:11:09.869 pt3' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.869 16:25:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.869 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 [2024-11-05 16:25:23.000483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.136 16:25:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1e3ec108-8042-40a7-a971-b05fcb39c71d 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1e3ec108-8042-40a7-a971-b05fcb39c71d ']' 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 [2024-11-05 16:25:23.048081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.136 [2024-11-05 16:25:23.048116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.136 [2024-11-05 16:25:23.048213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.136 [2024-11-05 16:25:23.048279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.136 [2024-11-05 16:25:23.048290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.136 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.136 [2024-11-05 16:25:23.199918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:10.136 [2024-11-05 16:25:23.201980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:11:10.136 [2024-11-05 16:25:23.202096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:10.136 [2024-11-05 16:25:23.202162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:10.137 [2024-11-05 16:25:23.202224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:10.137 [2024-11-05 16:25:23.202245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:10.137 [2024-11-05 16:25:23.202265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.137 [2024-11-05 16:25:23.202276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:10.137 request: 00:11:10.137 { 00:11:10.137 "name": "raid_bdev1", 00:11:10.137 "raid_level": "concat", 00:11:10.137 "base_bdevs": [ 00:11:10.137 "malloc1", 00:11:10.137 "malloc2", 00:11:10.137 "malloc3" 00:11:10.137 ], 00:11:10.137 "strip_size_kb": 64, 00:11:10.137 "superblock": false, 00:11:10.137 "method": "bdev_raid_create", 00:11:10.137 "req_id": 1 00:11:10.137 } 00:11:10.137 Got JSON-RPC error response 00:11:10.137 response: 00:11:10.137 { 00:11:10.137 "code": -17, 00:11:10.137 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:10.137 } 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.137 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.410 [2024-11-05 16:25:23.255734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.410 [2024-11-05 16:25:23.255853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.410 [2024-11-05 16:25:23.255905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:10.410 [2024-11-05 16:25:23.255940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.410 [2024-11-05 16:25:23.258527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.410 [2024-11-05 16:25:23.258618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.410 [2024-11-05 16:25:23.258752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.410 [2024-11-05 16:25:23.258853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.410 pt1 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.410 "name": "raid_bdev1", 
00:11:10.410 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:10.410 "strip_size_kb": 64, 00:11:10.410 "state": "configuring", 00:11:10.410 "raid_level": "concat", 00:11:10.410 "superblock": true, 00:11:10.410 "num_base_bdevs": 3, 00:11:10.410 "num_base_bdevs_discovered": 1, 00:11:10.410 "num_base_bdevs_operational": 3, 00:11:10.410 "base_bdevs_list": [ 00:11:10.410 { 00:11:10.410 "name": "pt1", 00:11:10.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.410 "is_configured": true, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 }, 00:11:10.410 { 00:11:10.410 "name": null, 00:11:10.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.410 "is_configured": false, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 }, 00:11:10.410 { 00:11:10.410 "name": null, 00:11:10.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.410 "is_configured": false, 00:11:10.410 "data_offset": 2048, 00:11:10.410 "data_size": 63488 00:11:10.410 } 00:11:10.410 ] 00:11:10.410 }' 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.410 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.670 [2024-11-05 16:25:23.683030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.670 [2024-11-05 16:25:23.683109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.670 [2024-11-05 16:25:23.683136] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:10.670 [2024-11-05 16:25:23.683147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.670 [2024-11-05 16:25:23.683668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.670 [2024-11-05 16:25:23.683687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.670 [2024-11-05 16:25:23.683787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.670 [2024-11-05 16:25:23.683810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.670 pt2 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.670 [2024-11-05 16:25:23.691019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.670 "name": "raid_bdev1", 00:11:10.670 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:10.670 "strip_size_kb": 64, 00:11:10.670 "state": "configuring", 00:11:10.670 "raid_level": "concat", 00:11:10.670 "superblock": true, 00:11:10.670 "num_base_bdevs": 3, 00:11:10.670 "num_base_bdevs_discovered": 1, 00:11:10.670 "num_base_bdevs_operational": 3, 00:11:10.670 "base_bdevs_list": [ 00:11:10.670 { 00:11:10.670 "name": "pt1", 00:11:10.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.670 "is_configured": true, 00:11:10.670 "data_offset": 2048, 00:11:10.670 "data_size": 63488 00:11:10.670 }, 00:11:10.670 { 00:11:10.670 "name": null, 00:11:10.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.670 "is_configured": false, 00:11:10.670 "data_offset": 0, 00:11:10.670 "data_size": 63488 00:11:10.670 }, 00:11:10.670 { 00:11:10.670 "name": null, 00:11:10.670 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.670 "is_configured": false, 00:11:10.670 "data_offset": 2048, 00:11:10.670 "data_size": 63488 00:11:10.670 } 00:11:10.670 ] 00:11:10.670 }' 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.670 16:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 [2024-11-05 16:25:24.174181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.239 [2024-11-05 16:25:24.174336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.239 [2024-11-05 16:25:24.174378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:11.239 [2024-11-05 16:25:24.174414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.239 [2024-11-05 16:25:24.174964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.239 [2024-11-05 16:25:24.175037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.239 [2024-11-05 16:25:24.175163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.239 [2024-11-05 16:25:24.175238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.239 pt2 00:11:11.239 16:25:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.239 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 [2024-11-05 16:25:24.186145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.239 [2024-11-05 16:25:24.186245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.239 [2024-11-05 16:25:24.186289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:11.239 [2024-11-05 16:25:24.186323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.239 [2024-11-05 16:25:24.186826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.239 [2024-11-05 16:25:24.186898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.239 [2024-11-05 16:25:24.187016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:11.239 [2024-11-05 16:25:24.187074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.239 [2024-11-05 16:25:24.187240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.239 [2024-11-05 16:25:24.187285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.239 [2024-11-05 16:25:24.187601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:11.239 [2024-11-05 16:25:24.187793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:11.239 [2024-11-05 16:25:24.187835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:11.239 [2024-11-05 16:25:24.188058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.239 pt3 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.240 16:25:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.240 "name": "raid_bdev1", 00:11:11.240 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:11.240 "strip_size_kb": 64, 00:11:11.240 "state": "online", 00:11:11.240 "raid_level": "concat", 00:11:11.240 "superblock": true, 00:11:11.240 "num_base_bdevs": 3, 00:11:11.240 "num_base_bdevs_discovered": 3, 00:11:11.240 "num_base_bdevs_operational": 3, 00:11:11.240 "base_bdevs_list": [ 00:11:11.240 { 00:11:11.240 "name": "pt1", 00:11:11.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.240 "is_configured": true, 00:11:11.240 "data_offset": 2048, 00:11:11.240 "data_size": 63488 00:11:11.240 }, 00:11:11.240 { 00:11:11.240 "name": "pt2", 00:11:11.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.240 "is_configured": true, 00:11:11.240 "data_offset": 2048, 00:11:11.240 "data_size": 63488 00:11:11.240 }, 00:11:11.240 { 00:11:11.240 "name": "pt3", 00:11:11.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.240 "is_configured": true, 00:11:11.240 "data_offset": 2048, 00:11:11.240 "data_size": 63488 00:11:11.240 } 00:11:11.240 ] 00:11:11.240 }' 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.240 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.809 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.809 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.810 [2024-11-05 16:25:24.689723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.810 "name": "raid_bdev1", 00:11:11.810 "aliases": [ 00:11:11.810 "1e3ec108-8042-40a7-a971-b05fcb39c71d" 00:11:11.810 ], 00:11:11.810 "product_name": "Raid Volume", 00:11:11.810 "block_size": 512, 00:11:11.810 "num_blocks": 190464, 00:11:11.810 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:11.810 "assigned_rate_limits": { 00:11:11.810 "rw_ios_per_sec": 0, 00:11:11.810 "rw_mbytes_per_sec": 0, 00:11:11.810 "r_mbytes_per_sec": 0, 00:11:11.810 "w_mbytes_per_sec": 0 00:11:11.810 }, 00:11:11.810 "claimed": false, 00:11:11.810 "zoned": false, 00:11:11.810 "supported_io_types": { 00:11:11.810 "read": true, 00:11:11.810 "write": true, 00:11:11.810 "unmap": true, 00:11:11.810 "flush": true, 00:11:11.810 "reset": true, 00:11:11.810 "nvme_admin": false, 00:11:11.810 "nvme_io": false, 00:11:11.810 
"nvme_io_md": false, 00:11:11.810 "write_zeroes": true, 00:11:11.810 "zcopy": false, 00:11:11.810 "get_zone_info": false, 00:11:11.810 "zone_management": false, 00:11:11.810 "zone_append": false, 00:11:11.810 "compare": false, 00:11:11.810 "compare_and_write": false, 00:11:11.810 "abort": false, 00:11:11.810 "seek_hole": false, 00:11:11.810 "seek_data": false, 00:11:11.810 "copy": false, 00:11:11.810 "nvme_iov_md": false 00:11:11.810 }, 00:11:11.810 "memory_domains": [ 00:11:11.810 { 00:11:11.810 "dma_device_id": "system", 00:11:11.810 "dma_device_type": 1 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.810 "dma_device_type": 2 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "dma_device_id": "system", 00:11:11.810 "dma_device_type": 1 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.810 "dma_device_type": 2 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "dma_device_id": "system", 00:11:11.810 "dma_device_type": 1 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.810 "dma_device_type": 2 00:11:11.810 } 00:11:11.810 ], 00:11:11.810 "driver_specific": { 00:11:11.810 "raid": { 00:11:11.810 "uuid": "1e3ec108-8042-40a7-a971-b05fcb39c71d", 00:11:11.810 "strip_size_kb": 64, 00:11:11.810 "state": "online", 00:11:11.810 "raid_level": "concat", 00:11:11.810 "superblock": true, 00:11:11.810 "num_base_bdevs": 3, 00:11:11.810 "num_base_bdevs_discovered": 3, 00:11:11.810 "num_base_bdevs_operational": 3, 00:11:11.810 "base_bdevs_list": [ 00:11:11.810 { 00:11:11.810 "name": "pt1", 00:11:11.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.810 "is_configured": true, 00:11:11.810 "data_offset": 2048, 00:11:11.810 "data_size": 63488 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "name": "pt2", 00:11:11.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.810 "is_configured": true, 00:11:11.810 "data_offset": 2048, 00:11:11.810 "data_size": 
63488 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "name": "pt3", 00:11:11.810 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.810 "is_configured": true, 00:11:11.810 "data_offset": 2048, 00:11:11.810 "data_size": 63488 00:11:11.810 } 00:11:11.810 ] 00:11:11.810 } 00:11:11.810 } 00:11:11.810 }' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:11.810 pt2 00:11:11.810 pt3' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.810 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.069 16:25:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:11:12.069 [2024-11-05 16:25:24.997157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1e3ec108-8042-40a7-a971-b05fcb39c71d '!=' 1e3ec108-8042-40a7-a971-b05fcb39c71d ']' 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67115 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67115 ']' 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67115 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67115 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67115' 00:11:12.069 killing process with pid 67115 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67115 00:11:12.069 [2024-11-05 16:25:25.082075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.069 [2024-11-05 16:25:25.082272] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.069 16:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67115 00:11:12.069 [2024-11-05 16:25:25.082377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.069 [2024-11-05 16:25:25.082433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.637 [2024-11-05 16:25:25.420723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.573 16:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:13.573 00:11:13.573 real 0m5.548s 00:11:13.573 user 0m7.967s 00:11:13.573 sys 0m0.915s 00:11:13.573 16:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.573 16:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.573 ************************************ 00:11:13.573 END TEST raid_superblock_test 00:11:13.573 ************************************ 00:11:13.834 16:25:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:13.834 16:25:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:13.834 16:25:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.834 16:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.834 ************************************ 00:11:13.834 START TEST raid_read_error_test 00:11:13.834 ************************************ 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:13.834 16:25:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.C8g8af8UZJ 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67370 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67370 00:11:13.834 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67370 ']' 00:11:13.835 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.835 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.835 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.835 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.835 16:25:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.835 [2024-11-05 16:25:26.833692] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:11:13.835 [2024-11-05 16:25:26.833926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67370 ] 00:11:14.094 [2024-11-05 16:25:27.017566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.094 [2024-11-05 16:25:27.147955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.358 [2024-11-05 16:25:27.366443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.358 [2024-11-05 16:25:27.366519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 BaseBdev1_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 true 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 [2024-11-05 16:25:27.798905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.927 [2024-11-05 16:25:27.798976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.927 [2024-11-05 16:25:27.799002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.927 [2024-11-05 16:25:27.799015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.927 [2024-11-05 16:25:27.801673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.927 [2024-11-05 16:25:27.801810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.927 BaseBdev1 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 BaseBdev2_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 true 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 [2024-11-05 16:25:27.870310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.927 [2024-11-05 16:25:27.870385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.927 [2024-11-05 16:25:27.870408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.927 [2024-11-05 16:25:27.870421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.927 [2024-11-05 16:25:27.872984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.927 [2024-11-05 16:25:27.873031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.927 BaseBdev2 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 BaseBdev3_malloc 00:11:14.927 16:25:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 true 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.927 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.927 [2024-11-05 16:25:27.953247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.927 [2024-11-05 16:25:27.953317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.928 [2024-11-05 16:25:27.953342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:14.928 [2024-11-05 16:25:27.953354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.928 [2024-11-05 16:25:27.955814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.928 [2024-11-05 16:25:27.955907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:14.928 BaseBdev3 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.928 [2024-11-05 16:25:27.965317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.928 [2024-11-05 16:25:27.967476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.928 [2024-11-05 16:25:27.967676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.928 [2024-11-05 16:25:27.967928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:14.928 [2024-11-05 16:25:27.967943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:14.928 [2024-11-05 16:25:27.968247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:14.928 [2024-11-05 16:25:27.968442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:14.928 [2024-11-05 16:25:27.968457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:14.928 [2024-11-05 16:25:27.968710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.928 16:25:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.928 16:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.223 16:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.223 "name": "raid_bdev1", 00:11:15.223 "uuid": "452940f9-d4cc-494c-9400-5b39eca79858", 00:11:15.223 "strip_size_kb": 64, 00:11:15.223 "state": "online", 00:11:15.223 "raid_level": "concat", 00:11:15.223 "superblock": true, 00:11:15.223 "num_base_bdevs": 3, 00:11:15.223 "num_base_bdevs_discovered": 3, 00:11:15.223 "num_base_bdevs_operational": 3, 00:11:15.223 "base_bdevs_list": [ 00:11:15.223 { 00:11:15.223 "name": "BaseBdev1", 00:11:15.223 "uuid": "4358648e-a39d-5469-9ef7-c4c26a9c0e1d", 00:11:15.223 "is_configured": true, 00:11:15.223 "data_offset": 2048, 00:11:15.223 "data_size": 63488 00:11:15.223 }, 00:11:15.223 { 00:11:15.223 "name": "BaseBdev2", 00:11:15.223 "uuid": "46001ae1-8d07-56df-b7a0-c947334fd480", 00:11:15.223 "is_configured": true, 00:11:15.223 "data_offset": 2048, 00:11:15.223 "data_size": 63488 
00:11:15.223 }, 00:11:15.223 { 00:11:15.223 "name": "BaseBdev3", 00:11:15.223 "uuid": "d0e11ced-8aef-5a5d-bb9b-e24b9d3a4f31", 00:11:15.223 "is_configured": true, 00:11:15.223 "data_offset": 2048, 00:11:15.223 "data_size": 63488 00:11:15.223 } 00:11:15.223 ] 00:11:15.223 }' 00:11:15.223 16:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.223 16:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.481 16:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:15.481 16:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.481 [2024-11-05 16:25:28.545805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.419 "name": "raid_bdev1", 00:11:16.419 "uuid": "452940f9-d4cc-494c-9400-5b39eca79858", 00:11:16.419 "strip_size_kb": 64, 00:11:16.419 "state": "online", 00:11:16.419 "raid_level": "concat", 00:11:16.419 "superblock": true, 00:11:16.419 "num_base_bdevs": 3, 00:11:16.419 "num_base_bdevs_discovered": 3, 00:11:16.419 "num_base_bdevs_operational": 3, 00:11:16.419 "base_bdevs_list": [ 00:11:16.419 { 00:11:16.419 "name": "BaseBdev1", 00:11:16.419 "uuid": "4358648e-a39d-5469-9ef7-c4c26a9c0e1d", 00:11:16.419 "is_configured": true, 00:11:16.419 "data_offset": 2048, 00:11:16.419 "data_size": 63488 
00:11:16.419 }, 00:11:16.419 { 00:11:16.419 "name": "BaseBdev2", 00:11:16.419 "uuid": "46001ae1-8d07-56df-b7a0-c947334fd480", 00:11:16.419 "is_configured": true, 00:11:16.419 "data_offset": 2048, 00:11:16.419 "data_size": 63488 00:11:16.419 }, 00:11:16.419 { 00:11:16.419 "name": "BaseBdev3", 00:11:16.419 "uuid": "d0e11ced-8aef-5a5d-bb9b-e24b9d3a4f31", 00:11:16.419 "is_configured": true, 00:11:16.419 "data_offset": 2048, 00:11:16.419 "data_size": 63488 00:11:16.419 } 00:11:16.419 ] 00:11:16.419 }' 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.419 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.989 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.990 [2024-11-05 16:25:29.894777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.990 [2024-11-05 16:25:29.894814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.990 [2024-11-05 16:25:29.898051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.990 [2024-11-05 16:25:29.898168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.990 [2024-11-05 16:25:29.898220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.990 [2024-11-05 16:25:29.898236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:16.990 { 00:11:16.990 "results": [ 00:11:16.990 { 00:11:16.990 "job": "raid_bdev1", 00:11:16.990 "core_mask": "0x1", 00:11:16.990 "workload": "randrw", 00:11:16.990 "percentage": 50, 
00:11:16.990 "status": "finished", 00:11:16.990 "queue_depth": 1, 00:11:16.990 "io_size": 131072, 00:11:16.990 "runtime": 1.349433, 00:11:16.990 "iops": 13613.865971856329, 00:11:16.990 "mibps": 1701.733246482041, 00:11:16.990 "io_failed": 1, 00:11:16.990 "io_timeout": 0, 00:11:16.990 "avg_latency_us": 102.04217581909818, 00:11:16.990 "min_latency_us": 29.065502183406114, 00:11:16.990 "max_latency_us": 1752.8733624454148 00:11:16.990 } 00:11:16.990 ], 00:11:16.990 "core_count": 1 00:11:16.990 } 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67370 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67370 ']' 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67370 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67370 00:11:16.990 killing process with pid 67370 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67370' 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67370 00:11:16.990 [2024-11-05 16:25:29.933974] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.990 16:25:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67370 00:11:17.249 [2024-11-05 
16:25:30.212046] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C8g8af8UZJ 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:18.625 00:11:18.625 real 0m4.887s 00:11:18.625 user 0m5.799s 00:11:18.625 sys 0m0.587s 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.625 ************************************ 00:11:18.625 END TEST raid_read_error_test 00:11:18.625 ************************************ 00:11:18.625 16:25:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.625 16:25:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:18.625 16:25:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:18.625 16:25:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.625 16:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.625 ************************************ 00:11:18.625 START TEST raid_write_error_test 00:11:18.625 ************************************ 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:11:18.625 16:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.625 16:25:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bqtE0E4cFZ 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67516 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67516 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67516 ']' 00:11:18.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:18.625 16:25:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.884 [2024-11-05 16:25:31.797283] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:11:18.884 [2024-11-05 16:25:31.797432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67516 ] 00:11:18.884 [2024-11-05 16:25:31.961501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.143 [2024-11-05 16:25:32.092582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.401 [2024-11-05 16:25:32.316443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.401 [2024-11-05 16:25:32.316562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.726 BaseBdev1_malloc 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.726 true 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.726 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.727 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 [2024-11-05 16:25:32.798485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.727 [2024-11-05 16:25:32.798638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.727 [2024-11-05 16:25:32.798687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.727 [2024-11-05 16:25:32.798701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.986 [2024-11-05 16:25:32.801174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.986 [2024-11-05 16:25:32.801224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.986 BaseBdev1 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.986 BaseBdev2_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 true 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 [2024-11-05 16:25:32.860690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.986 [2024-11-05 16:25:32.860772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.986 [2024-11-05 16:25:32.860801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.986 [2024-11-05 16:25:32.860819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.986 [2024-11-05 16:25:32.863356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.986 [2024-11-05 16:25:32.863401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.986 BaseBdev2 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.986 16:25:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 BaseBdev3_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 true 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 [2024-11-05 16:25:32.936164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.986 [2024-11-05 16:25:32.936278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.986 [2024-11-05 16:25:32.936304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:19.986 [2024-11-05 16:25:32.936316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.986 [2024-11-05 16:25:32.938811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.986 [2024-11-05 16:25:32.938853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:19.986 BaseBdev3 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 [2024-11-05 16:25:32.948231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.986 [2024-11-05 16:25:32.950379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.986 [2024-11-05 16:25:32.950534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.986 [2024-11-05 16:25:32.950799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:19.986 [2024-11-05 16:25:32.950814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.986 [2024-11-05 16:25:32.951119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:19.986 [2024-11-05 16:25:32.951298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:19.986 [2024-11-05 16:25:32.951313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:19.986 [2024-11-05 16:25:32.951508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 16:25:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 16:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.986 "name": "raid_bdev1", 00:11:19.986 "uuid": "9f97a9bd-9fa6-4496-86c3-6bd1285eb024", 00:11:19.986 "strip_size_kb": 64, 00:11:19.986 "state": "online", 00:11:19.986 "raid_level": "concat", 00:11:19.986 "superblock": true, 00:11:19.986 "num_base_bdevs": 3, 00:11:19.986 "num_base_bdevs_discovered": 3, 00:11:19.986 "num_base_bdevs_operational": 3, 00:11:19.986 "base_bdevs_list": [ 00:11:19.986 { 00:11:19.986 
"name": "BaseBdev1", 00:11:19.986 "uuid": "970c50c4-9aea-56ff-ba68-adabbd82a901", 00:11:19.986 "is_configured": true, 00:11:19.986 "data_offset": 2048, 00:11:19.986 "data_size": 63488 00:11:19.986 }, 00:11:19.986 { 00:11:19.986 "name": "BaseBdev2", 00:11:19.986 "uuid": "0f5681a9-dd9e-51ca-95b5-8e8bfaf644fd", 00:11:19.986 "is_configured": true, 00:11:19.986 "data_offset": 2048, 00:11:19.986 "data_size": 63488 00:11:19.986 }, 00:11:19.986 { 00:11:19.986 "name": "BaseBdev3", 00:11:19.986 "uuid": "92889f7e-11ea-567f-8ed2-5b1f2ecf435a", 00:11:19.986 "is_configured": true, 00:11:19.986 "data_offset": 2048, 00:11:19.986 "data_size": 63488 00:11:19.987 } 00:11:19.987 ] 00:11:19.987 }' 00:11:19.987 16:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.987 16:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.555 16:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.555 16:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:20.555 [2024-11-05 16:25:33.520749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.491 "name": "raid_bdev1", 00:11:21.491 "uuid": "9f97a9bd-9fa6-4496-86c3-6bd1285eb024", 00:11:21.491 "strip_size_kb": 64, 00:11:21.491 "state": "online", 
00:11:21.491 "raid_level": "concat", 00:11:21.491 "superblock": true, 00:11:21.491 "num_base_bdevs": 3, 00:11:21.491 "num_base_bdevs_discovered": 3, 00:11:21.491 "num_base_bdevs_operational": 3, 00:11:21.491 "base_bdevs_list": [ 00:11:21.491 { 00:11:21.491 "name": "BaseBdev1", 00:11:21.491 "uuid": "970c50c4-9aea-56ff-ba68-adabbd82a901", 00:11:21.491 "is_configured": true, 00:11:21.491 "data_offset": 2048, 00:11:21.491 "data_size": 63488 00:11:21.491 }, 00:11:21.491 { 00:11:21.491 "name": "BaseBdev2", 00:11:21.491 "uuid": "0f5681a9-dd9e-51ca-95b5-8e8bfaf644fd", 00:11:21.491 "is_configured": true, 00:11:21.491 "data_offset": 2048, 00:11:21.491 "data_size": 63488 00:11:21.491 }, 00:11:21.491 { 00:11:21.491 "name": "BaseBdev3", 00:11:21.491 "uuid": "92889f7e-11ea-567f-8ed2-5b1f2ecf435a", 00:11:21.491 "is_configured": true, 00:11:21.491 "data_offset": 2048, 00:11:21.491 "data_size": 63488 00:11:21.491 } 00:11:21.491 ] 00:11:21.491 }' 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.491 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.060 [2024-11-05 16:25:34.910130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.060 [2024-11-05 16:25:34.910252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.060 [2024-11-05 16:25:34.913509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.060 [2024-11-05 16:25:34.913647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.060 [2024-11-05 16:25:34.913718] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.060 [2024-11-05 16:25:34.913779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:22.060 { 00:11:22.060 "results": [ 00:11:22.060 { 00:11:22.060 "job": "raid_bdev1", 00:11:22.060 "core_mask": "0x1", 00:11:22.060 "workload": "randrw", 00:11:22.060 "percentage": 50, 00:11:22.060 "status": "finished", 00:11:22.060 "queue_depth": 1, 00:11:22.060 "io_size": 131072, 00:11:22.060 "runtime": 1.39005, 00:11:22.060 "iops": 13805.978202222941, 00:11:22.060 "mibps": 1725.7472752778676, 00:11:22.060 "io_failed": 1, 00:11:22.060 "io_timeout": 0, 00:11:22.060 "avg_latency_us": 100.68014802383088, 00:11:22.060 "min_latency_us": 28.05938864628821, 00:11:22.060 "max_latency_us": 1760.0279475982534 00:11:22.060 } 00:11:22.060 ], 00:11:22.060 "core_count": 1 00:11:22.060 } 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67516 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67516 ']' 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67516 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67516 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:22.060 killing process with pid 67516 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:22.060 16:25:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67516' 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67516 00:11:22.060 [2024-11-05 16:25:34.959843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.060 16:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67516 00:11:22.320 [2024-11-05 16:25:35.223118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bqtE0E4cFZ 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:23.701 ************************************ 00:11:23.701 END TEST raid_write_error_test 00:11:23.701 ************************************ 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:23.701 00:11:23.701 real 0m4.881s 00:11:23.701 user 0m5.840s 00:11:23.701 sys 0m0.588s 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.701 16:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.701 16:25:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.701 16:25:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:23.701 16:25:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:23.701 16:25:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.701 16:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.701 ************************************ 00:11:23.701 START TEST raid_state_function_test 00:11:23.701 ************************************ 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:23.701 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67665 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67665' 00:11:23.702 Process raid pid: 67665 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67665 00:11:23.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67665 ']' 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.702 16:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.702 [2024-11-05 16:25:36.722830] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:11:23.702 [2024-11-05 16:25:36.723054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.960 [2024-11-05 16:25:36.906780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.960 [2024-11-05 16:25:37.041339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.218 [2024-11-05 16:25:37.278495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.218 [2024-11-05 16:25:37.278655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 [2024-11-05 16:25:37.686626] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.786 [2024-11-05 16:25:37.686769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.786 [2024-11-05 16:25:37.686787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.786 [2024-11-05 16:25:37.686800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.786 [2024-11-05 16:25:37.686807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.786 [2024-11-05 16:25:37.686818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.786 16:25:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.786 "name": "Existed_Raid", 00:11:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.786 "strip_size_kb": 0, 00:11:24.786 "state": "configuring", 00:11:24.786 "raid_level": "raid1", 00:11:24.786 "superblock": false, 00:11:24.786 "num_base_bdevs": 3, 00:11:24.786 "num_base_bdevs_discovered": 0, 00:11:24.786 "num_base_bdevs_operational": 3, 00:11:24.786 "base_bdevs_list": [ 00:11:24.786 { 00:11:24.786 "name": "BaseBdev1", 00:11:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.786 "is_configured": false, 00:11:24.786 "data_offset": 0, 00:11:24.786 "data_size": 0 00:11:24.786 }, 00:11:24.786 { 00:11:24.786 "name": "BaseBdev2", 00:11:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.786 "is_configured": false, 00:11:24.786 "data_offset": 0, 00:11:24.786 "data_size": 0 00:11:24.786 }, 00:11:24.786 { 00:11:24.786 "name": "BaseBdev3", 00:11:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.786 "is_configured": false, 00:11:24.786 "data_offset": 0, 
00:11:24.786 "data_size": 0 00:11:24.786 } 00:11:24.786 ] 00:11:24.786 }' 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.786 16:25:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.046 [2024-11-05 16:25:38.129818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.046 [2024-11-05 16:25:38.129942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.046 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 [2024-11-05 16:25:38.141806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.312 [2024-11-05 16:25:38.141934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.312 [2024-11-05 16:25:38.141967] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.312 [2024-11-05 16:25:38.141994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.312 [2024-11-05 16:25:38.142015] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:11:25.312 [2024-11-05 16:25:38.142039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 [2024-11-05 16:25:38.191154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.312 BaseBdev1 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.312 [ 00:11:25.312 { 00:11:25.312 "name": "BaseBdev1", 00:11:25.312 "aliases": [ 00:11:25.312 "da6d36a9-08d4-4c7a-bc4f-3b856d483d23" 00:11:25.312 ], 00:11:25.312 "product_name": "Malloc disk", 00:11:25.312 "block_size": 512, 00:11:25.312 "num_blocks": 65536, 00:11:25.312 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:25.312 "assigned_rate_limits": { 00:11:25.312 "rw_ios_per_sec": 0, 00:11:25.312 "rw_mbytes_per_sec": 0, 00:11:25.312 "r_mbytes_per_sec": 0, 00:11:25.312 "w_mbytes_per_sec": 0 00:11:25.312 }, 00:11:25.312 "claimed": true, 00:11:25.312 "claim_type": "exclusive_write", 00:11:25.312 "zoned": false, 00:11:25.312 "supported_io_types": { 00:11:25.312 "read": true, 00:11:25.312 "write": true, 00:11:25.312 "unmap": true, 00:11:25.312 "flush": true, 00:11:25.312 "reset": true, 00:11:25.312 "nvme_admin": false, 00:11:25.312 "nvme_io": false, 00:11:25.312 "nvme_io_md": false, 00:11:25.312 "write_zeroes": true, 00:11:25.312 "zcopy": true, 00:11:25.312 "get_zone_info": false, 00:11:25.312 "zone_management": false, 00:11:25.312 "zone_append": false, 00:11:25.312 "compare": false, 00:11:25.312 "compare_and_write": false, 00:11:25.312 "abort": true, 00:11:25.312 "seek_hole": false, 00:11:25.312 "seek_data": false, 00:11:25.312 "copy": true, 00:11:25.312 "nvme_iov_md": false 00:11:25.312 }, 00:11:25.312 "memory_domains": [ 00:11:25.312 { 00:11:25.312 "dma_device_id": "system", 00:11:25.312 "dma_device_type": 1 00:11:25.312 }, 00:11:25.312 { 00:11:25.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.312 "dma_device_type": 2 00:11:25.312 } 00:11:25.312 ], 00:11:25.312 "driver_specific": {} 00:11:25.312 } 
00:11:25.312 ] 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.312 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.313 16:25:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.313 "name": "Existed_Raid", 00:11:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.313 "strip_size_kb": 0, 00:11:25.313 "state": "configuring", 00:11:25.313 "raid_level": "raid1", 00:11:25.313 "superblock": false, 00:11:25.313 "num_base_bdevs": 3, 00:11:25.313 "num_base_bdevs_discovered": 1, 00:11:25.313 "num_base_bdevs_operational": 3, 00:11:25.313 "base_bdevs_list": [ 00:11:25.313 { 00:11:25.313 "name": "BaseBdev1", 00:11:25.313 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:25.313 "is_configured": true, 00:11:25.313 "data_offset": 0, 00:11:25.313 "data_size": 65536 00:11:25.313 }, 00:11:25.313 { 00:11:25.313 "name": "BaseBdev2", 00:11:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.313 "is_configured": false, 00:11:25.313 "data_offset": 0, 00:11:25.313 "data_size": 0 00:11:25.313 }, 00:11:25.313 { 00:11:25.313 "name": "BaseBdev3", 00:11:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.313 "is_configured": false, 00:11:25.313 "data_offset": 0, 00:11:25.313 "data_size": 0 00:11:25.313 } 00:11:25.313 ] 00:11:25.313 }' 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.313 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-05 16:25:38.714336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.885 [2024-11-05 16:25:38.714468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 [2024-11-05 16:25:38.722404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.885 [2024-11-05 16:25:38.724626] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.885 [2024-11-05 16:25:38.724678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.885 [2024-11-05 16:25:38.724691] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.885 [2024-11-05 16:25:38.724702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.885 16:25:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.885 "name": "Existed_Raid", 00:11:25.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.885 "strip_size_kb": 0, 00:11:25.885 "state": "configuring", 00:11:25.885 "raid_level": "raid1", 00:11:25.885 "superblock": false, 00:11:25.885 "num_base_bdevs": 3, 00:11:25.885 "num_base_bdevs_discovered": 1, 00:11:25.885 "num_base_bdevs_operational": 3, 00:11:25.885 "base_bdevs_list": [ 00:11:25.885 { 00:11:25.885 "name": "BaseBdev1", 00:11:25.885 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:25.885 "is_configured": true, 00:11:25.885 "data_offset": 0, 00:11:25.885 "data_size": 65536 00:11:25.885 }, 00:11:25.885 { 00:11:25.885 "name": "BaseBdev2", 00:11:25.885 
"uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.885 "is_configured": false, 00:11:25.885 "data_offset": 0, 00:11:25.885 "data_size": 0 00:11:25.885 }, 00:11:25.885 { 00:11:25.885 "name": "BaseBdev3", 00:11:25.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.885 "is_configured": false, 00:11:25.885 "data_offset": 0, 00:11:25.885 "data_size": 0 00:11:25.885 } 00:11:25.885 ] 00:11:25.885 }' 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.885 16:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.145 [2024-11-05 16:25:39.208658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.145 BaseBdev2 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.145 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 [ 00:11:26.405 { 00:11:26.405 "name": "BaseBdev2", 00:11:26.405 "aliases": [ 00:11:26.405 "c09a3d9e-19b5-4b57-9942-07b746b6d9b7" 00:11:26.405 ], 00:11:26.405 "product_name": "Malloc disk", 00:11:26.405 "block_size": 512, 00:11:26.405 "num_blocks": 65536, 00:11:26.405 "uuid": "c09a3d9e-19b5-4b57-9942-07b746b6d9b7", 00:11:26.405 "assigned_rate_limits": { 00:11:26.405 "rw_ios_per_sec": 0, 00:11:26.405 "rw_mbytes_per_sec": 0, 00:11:26.405 "r_mbytes_per_sec": 0, 00:11:26.405 "w_mbytes_per_sec": 0 00:11:26.405 }, 00:11:26.405 "claimed": true, 00:11:26.405 "claim_type": "exclusive_write", 00:11:26.405 "zoned": false, 00:11:26.405 "supported_io_types": { 00:11:26.405 "read": true, 00:11:26.405 "write": true, 00:11:26.405 "unmap": true, 00:11:26.405 "flush": true, 00:11:26.405 "reset": true, 00:11:26.405 "nvme_admin": false, 00:11:26.405 "nvme_io": false, 00:11:26.405 "nvme_io_md": false, 00:11:26.405 "write_zeroes": true, 00:11:26.405 "zcopy": true, 00:11:26.405 "get_zone_info": false, 00:11:26.405 "zone_management": false, 00:11:26.405 "zone_append": false, 00:11:26.405 "compare": false, 00:11:26.405 "compare_and_write": false, 00:11:26.405 "abort": true, 00:11:26.405 "seek_hole": false, 00:11:26.405 "seek_data": false, 00:11:26.405 "copy": true, 00:11:26.405 "nvme_iov_md": false 
00:11:26.405 }, 00:11:26.405 "memory_domains": [ 00:11:26.405 { 00:11:26.405 "dma_device_id": "system", 00:11:26.405 "dma_device_type": 1 00:11:26.405 }, 00:11:26.405 { 00:11:26.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.405 "dma_device_type": 2 00:11:26.405 } 00:11:26.405 ], 00:11:26.405 "driver_specific": {} 00:11:26.405 } 00:11:26.405 ] 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.405 "name": "Existed_Raid", 00:11:26.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.405 "strip_size_kb": 0, 00:11:26.405 "state": "configuring", 00:11:26.405 "raid_level": "raid1", 00:11:26.405 "superblock": false, 00:11:26.405 "num_base_bdevs": 3, 00:11:26.405 "num_base_bdevs_discovered": 2, 00:11:26.405 "num_base_bdevs_operational": 3, 00:11:26.405 "base_bdevs_list": [ 00:11:26.405 { 00:11:26.405 "name": "BaseBdev1", 00:11:26.405 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:26.405 "is_configured": true, 00:11:26.405 "data_offset": 0, 00:11:26.405 "data_size": 65536 00:11:26.405 }, 00:11:26.405 { 00:11:26.405 "name": "BaseBdev2", 00:11:26.405 "uuid": "c09a3d9e-19b5-4b57-9942-07b746b6d9b7", 00:11:26.405 "is_configured": true, 00:11:26.405 "data_offset": 0, 00:11:26.405 "data_size": 65536 00:11:26.405 }, 00:11:26.405 { 00:11:26.405 "name": "BaseBdev3", 00:11:26.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.405 "is_configured": false, 00:11:26.405 "data_offset": 0, 00:11:26.405 "data_size": 0 00:11:26.405 } 00:11:26.405 ] 00:11:26.405 }' 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.405 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.680 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.680 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.680 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.939 [2024-11-05 16:25:39.779441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.939 [2024-11-05 16:25:39.779598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.939 [2024-11-05 16:25:39.779655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:26.939 [2024-11-05 16:25:39.779989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.939 [2024-11-05 16:25:39.780230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.939 [2024-11-05 16:25:39.780276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.939 [2024-11-05 16:25:39.780653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.939 BaseBdev3 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.939 16:25:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.939 [ 00:11:26.939 { 00:11:26.939 "name": "BaseBdev3", 00:11:26.939 "aliases": [ 00:11:26.939 "be50aeca-2088-41ab-ae5c-1ece7aa3f5cf" 00:11:26.939 ], 00:11:26.939 "product_name": "Malloc disk", 00:11:26.939 "block_size": 512, 00:11:26.939 "num_blocks": 65536, 00:11:26.939 "uuid": "be50aeca-2088-41ab-ae5c-1ece7aa3f5cf", 00:11:26.939 "assigned_rate_limits": { 00:11:26.939 "rw_ios_per_sec": 0, 00:11:26.939 "rw_mbytes_per_sec": 0, 00:11:26.939 "r_mbytes_per_sec": 0, 00:11:26.939 "w_mbytes_per_sec": 0 00:11:26.939 }, 00:11:26.939 "claimed": true, 00:11:26.939 "claim_type": "exclusive_write", 00:11:26.939 "zoned": false, 00:11:26.939 "supported_io_types": { 00:11:26.939 "read": true, 00:11:26.939 "write": true, 00:11:26.939 "unmap": true, 00:11:26.939 "flush": true, 00:11:26.939 "reset": true, 00:11:26.939 "nvme_admin": false, 00:11:26.939 "nvme_io": false, 00:11:26.939 "nvme_io_md": false, 00:11:26.939 "write_zeroes": true, 00:11:26.939 "zcopy": true, 00:11:26.939 "get_zone_info": false, 00:11:26.939 "zone_management": false, 00:11:26.939 "zone_append": false, 00:11:26.939 "compare": false, 00:11:26.939 "compare_and_write": false, 00:11:26.939 "abort": true, 00:11:26.939 "seek_hole": false, 00:11:26.939 
"seek_data": false, 00:11:26.939 "copy": true, 00:11:26.939 "nvme_iov_md": false 00:11:26.939 }, 00:11:26.939 "memory_domains": [ 00:11:26.939 { 00:11:26.939 "dma_device_id": "system", 00:11:26.939 "dma_device_type": 1 00:11:26.939 }, 00:11:26.939 { 00:11:26.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.939 "dma_device_type": 2 00:11:26.939 } 00:11:26.939 ], 00:11:26.939 "driver_specific": {} 00:11:26.939 } 00:11:26.939 ] 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.939 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.940 "name": "Existed_Raid", 00:11:26.940 "uuid": "92afca8c-b2db-4b8e-9a8d-6648f4732369", 00:11:26.940 "strip_size_kb": 0, 00:11:26.940 "state": "online", 00:11:26.940 "raid_level": "raid1", 00:11:26.940 "superblock": false, 00:11:26.940 "num_base_bdevs": 3, 00:11:26.940 "num_base_bdevs_discovered": 3, 00:11:26.940 "num_base_bdevs_operational": 3, 00:11:26.940 "base_bdevs_list": [ 00:11:26.940 { 00:11:26.940 "name": "BaseBdev1", 00:11:26.940 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:26.940 "is_configured": true, 00:11:26.940 "data_offset": 0, 00:11:26.940 "data_size": 65536 00:11:26.940 }, 00:11:26.940 { 00:11:26.940 "name": "BaseBdev2", 00:11:26.940 "uuid": "c09a3d9e-19b5-4b57-9942-07b746b6d9b7", 00:11:26.940 "is_configured": true, 00:11:26.940 "data_offset": 0, 00:11:26.940 "data_size": 65536 00:11:26.940 }, 00:11:26.940 { 00:11:26.940 "name": "BaseBdev3", 00:11:26.940 "uuid": "be50aeca-2088-41ab-ae5c-1ece7aa3f5cf", 00:11:26.940 "is_configured": true, 00:11:26.940 "data_offset": 0, 00:11:26.940 "data_size": 65536 00:11:26.940 } 00:11:26.940 ] 00:11:26.940 }' 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.940 16:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.199 
16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.199 [2024-11-05 16:25:40.223152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.199 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.199 "name": "Existed_Raid", 00:11:27.199 "aliases": [ 00:11:27.199 "92afca8c-b2db-4b8e-9a8d-6648f4732369" 00:11:27.199 ], 00:11:27.199 "product_name": "Raid Volume", 00:11:27.199 "block_size": 512, 00:11:27.199 "num_blocks": 65536, 00:11:27.199 "uuid": "92afca8c-b2db-4b8e-9a8d-6648f4732369", 00:11:27.199 "assigned_rate_limits": { 00:11:27.199 "rw_ios_per_sec": 0, 00:11:27.199 "rw_mbytes_per_sec": 0, 00:11:27.200 "r_mbytes_per_sec": 0, 00:11:27.200 "w_mbytes_per_sec": 0 00:11:27.200 }, 00:11:27.200 "claimed": false, 00:11:27.200 "zoned": false, 
00:11:27.200 "supported_io_types": { 00:11:27.200 "read": true, 00:11:27.200 "write": true, 00:11:27.200 "unmap": false, 00:11:27.200 "flush": false, 00:11:27.200 "reset": true, 00:11:27.200 "nvme_admin": false, 00:11:27.200 "nvme_io": false, 00:11:27.200 "nvme_io_md": false, 00:11:27.200 "write_zeroes": true, 00:11:27.200 "zcopy": false, 00:11:27.200 "get_zone_info": false, 00:11:27.200 "zone_management": false, 00:11:27.200 "zone_append": false, 00:11:27.200 "compare": false, 00:11:27.200 "compare_and_write": false, 00:11:27.200 "abort": false, 00:11:27.200 "seek_hole": false, 00:11:27.200 "seek_data": false, 00:11:27.200 "copy": false, 00:11:27.200 "nvme_iov_md": false 00:11:27.200 }, 00:11:27.200 "memory_domains": [ 00:11:27.200 { 00:11:27.200 "dma_device_id": "system", 00:11:27.200 "dma_device_type": 1 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.200 "dma_device_type": 2 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "dma_device_id": "system", 00:11:27.200 "dma_device_type": 1 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.200 "dma_device_type": 2 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "dma_device_id": "system", 00:11:27.200 "dma_device_type": 1 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.200 "dma_device_type": 2 00:11:27.200 } 00:11:27.200 ], 00:11:27.200 "driver_specific": { 00:11:27.200 "raid": { 00:11:27.200 "uuid": "92afca8c-b2db-4b8e-9a8d-6648f4732369", 00:11:27.200 "strip_size_kb": 0, 00:11:27.200 "state": "online", 00:11:27.200 "raid_level": "raid1", 00:11:27.200 "superblock": false, 00:11:27.200 "num_base_bdevs": 3, 00:11:27.200 "num_base_bdevs_discovered": 3, 00:11:27.200 "num_base_bdevs_operational": 3, 00:11:27.200 "base_bdevs_list": [ 00:11:27.200 { 00:11:27.200 "name": "BaseBdev1", 00:11:27.200 "uuid": "da6d36a9-08d4-4c7a-bc4f-3b856d483d23", 00:11:27.200 "is_configured": true, 00:11:27.200 
"data_offset": 0, 00:11:27.200 "data_size": 65536 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "name": "BaseBdev2", 00:11:27.200 "uuid": "c09a3d9e-19b5-4b57-9942-07b746b6d9b7", 00:11:27.200 "is_configured": true, 00:11:27.200 "data_offset": 0, 00:11:27.200 "data_size": 65536 00:11:27.200 }, 00:11:27.200 { 00:11:27.200 "name": "BaseBdev3", 00:11:27.200 "uuid": "be50aeca-2088-41ab-ae5c-1ece7aa3f5cf", 00:11:27.200 "is_configured": true, 00:11:27.200 "data_offset": 0, 00:11:27.200 "data_size": 65536 00:11:27.200 } 00:11:27.200 ] 00:11:27.200 } 00:11:27.200 } 00:11:27.200 }' 00:11:27.200 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:27.459 BaseBdev2 00:11:27.459 BaseBdev3' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.459 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.460 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.460 [2024-11-05 16:25:40.514400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.719 "name": "Existed_Raid", 00:11:27.719 "uuid": "92afca8c-b2db-4b8e-9a8d-6648f4732369", 00:11:27.719 "strip_size_kb": 0, 00:11:27.719 "state": "online", 00:11:27.719 "raid_level": "raid1", 00:11:27.719 "superblock": false, 00:11:27.719 "num_base_bdevs": 3, 00:11:27.719 "num_base_bdevs_discovered": 2, 00:11:27.719 "num_base_bdevs_operational": 2, 00:11:27.719 "base_bdevs_list": [ 00:11:27.719 { 00:11:27.719 "name": null, 00:11:27.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.719 "is_configured": false, 00:11:27.719 "data_offset": 0, 00:11:27.719 "data_size": 65536 00:11:27.719 }, 00:11:27.719 { 00:11:27.719 "name": "BaseBdev2", 00:11:27.719 "uuid": "c09a3d9e-19b5-4b57-9942-07b746b6d9b7", 00:11:27.719 "is_configured": true, 00:11:27.719 "data_offset": 0, 00:11:27.719 "data_size": 65536 00:11:27.719 }, 00:11:27.719 { 00:11:27.719 "name": "BaseBdev3", 00:11:27.719 "uuid": "be50aeca-2088-41ab-ae5c-1ece7aa3f5cf", 00:11:27.719 "is_configured": true, 00:11:27.719 "data_offset": 0, 00:11:27.719 "data_size": 65536 00:11:27.719 } 00:11:27.719 ] 
00:11:27.719 }' 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.719 16:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.979 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.238 [2024-11-05 16:25:41.098222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.238 16:25:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.238 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.238 [2024-11-05 16:25:41.268250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.238 [2024-11-05 16:25:41.268371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.498 [2024-11-05 16:25:41.373245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.498 [2024-11-05 16:25:41.373311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.498 [2024-11-05 16:25:41.373325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:28.498 16:25:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 BaseBdev2 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.498 
16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 [ 00:11:28.498 { 00:11:28.498 "name": "BaseBdev2", 00:11:28.498 "aliases": [ 00:11:28.498 "d4c7bab6-0d72-4200-ae59-fe8532ac41b2" 00:11:28.498 ], 00:11:28.498 "product_name": "Malloc disk", 00:11:28.498 "block_size": 512, 00:11:28.498 "num_blocks": 65536, 00:11:28.498 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:28.498 "assigned_rate_limits": { 00:11:28.498 "rw_ios_per_sec": 0, 00:11:28.498 "rw_mbytes_per_sec": 0, 00:11:28.498 "r_mbytes_per_sec": 0, 00:11:28.498 "w_mbytes_per_sec": 0 00:11:28.498 }, 00:11:28.498 "claimed": false, 00:11:28.498 "zoned": false, 00:11:28.498 "supported_io_types": { 00:11:28.498 "read": true, 00:11:28.498 "write": true, 00:11:28.498 "unmap": true, 00:11:28.498 "flush": true, 00:11:28.498 "reset": true, 00:11:28.498 "nvme_admin": false, 00:11:28.498 "nvme_io": false, 00:11:28.498 "nvme_io_md": false, 00:11:28.498 "write_zeroes": true, 
00:11:28.498 "zcopy": true, 00:11:28.498 "get_zone_info": false, 00:11:28.498 "zone_management": false, 00:11:28.498 "zone_append": false, 00:11:28.498 "compare": false, 00:11:28.498 "compare_and_write": false, 00:11:28.498 "abort": true, 00:11:28.498 "seek_hole": false, 00:11:28.498 "seek_data": false, 00:11:28.498 "copy": true, 00:11:28.498 "nvme_iov_md": false 00:11:28.498 }, 00:11:28.498 "memory_domains": [ 00:11:28.498 { 00:11:28.498 "dma_device_id": "system", 00:11:28.498 "dma_device_type": 1 00:11:28.498 }, 00:11:28.498 { 00:11:28.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.498 "dma_device_type": 2 00:11:28.498 } 00:11:28.498 ], 00:11:28.498 "driver_specific": {} 00:11:28.498 } 00:11:28.498 ] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 BaseBdev3 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.498 16:25:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.498 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.499 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [ 00:11:28.499 { 00:11:28.499 "name": "BaseBdev3", 00:11:28.499 "aliases": [ 00:11:28.499 "60c05c97-dff3-470a-b75d-cccb5d07e2cb" 00:11:28.499 ], 00:11:28.499 "product_name": "Malloc disk", 00:11:28.499 "block_size": 512, 00:11:28.499 "num_blocks": 65536, 00:11:28.499 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:28.499 "assigned_rate_limits": { 00:11:28.499 "rw_ios_per_sec": 0, 00:11:28.499 "rw_mbytes_per_sec": 0, 00:11:28.499 "r_mbytes_per_sec": 0, 00:11:28.499 "w_mbytes_per_sec": 0 00:11:28.499 }, 00:11:28.499 "claimed": false, 00:11:28.499 "zoned": false, 00:11:28.499 "supported_io_types": { 00:11:28.499 "read": true, 00:11:28.499 "write": true, 00:11:28.499 "unmap": true, 00:11:28.499 "flush": true, 00:11:28.499 "reset": true, 00:11:28.499 "nvme_admin": false, 00:11:28.499 "nvme_io": false, 00:11:28.499 "nvme_io_md": false, 00:11:28.499 "write_zeroes": true, 
00:11:28.499 "zcopy": true, 00:11:28.499 "get_zone_info": false, 00:11:28.499 "zone_management": false, 00:11:28.499 "zone_append": false, 00:11:28.499 "compare": false, 00:11:28.499 "compare_and_write": false, 00:11:28.499 "abort": true, 00:11:28.499 "seek_hole": false, 00:11:28.499 "seek_data": false, 00:11:28.499 "copy": true, 00:11:28.499 "nvme_iov_md": false 00:11:28.499 }, 00:11:28.499 "memory_domains": [ 00:11:28.499 { 00:11:28.499 "dma_device_id": "system", 00:11:28.499 "dma_device_type": 1 00:11:28.499 }, 00:11:28.499 { 00:11:28.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.499 "dma_device_type": 2 00:11:28.499 } 00:11:28.499 ], 00:11:28.499 "driver_specific": {} 00:11:28.499 } 00:11:28.499 ] 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.759 [2024-11-05 16:25:41.596112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.759 [2024-11-05 16:25:41.596188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.759 [2024-11-05 16:25:41.596217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.759 [2024-11-05 16:25:41.598389] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:28.759 "name": "Existed_Raid", 00:11:28.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.759 "strip_size_kb": 0, 00:11:28.759 "state": "configuring", 00:11:28.759 "raid_level": "raid1", 00:11:28.759 "superblock": false, 00:11:28.759 "num_base_bdevs": 3, 00:11:28.759 "num_base_bdevs_discovered": 2, 00:11:28.759 "num_base_bdevs_operational": 3, 00:11:28.759 "base_bdevs_list": [ 00:11:28.759 { 00:11:28.759 "name": "BaseBdev1", 00:11:28.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.759 "is_configured": false, 00:11:28.759 "data_offset": 0, 00:11:28.759 "data_size": 0 00:11:28.759 }, 00:11:28.759 { 00:11:28.759 "name": "BaseBdev2", 00:11:28.759 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:28.759 "is_configured": true, 00:11:28.759 "data_offset": 0, 00:11:28.759 "data_size": 65536 00:11:28.759 }, 00:11:28.759 { 00:11:28.759 "name": "BaseBdev3", 00:11:28.759 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:28.759 "is_configured": true, 00:11:28.759 "data_offset": 0, 00:11:28.759 "data_size": 65536 00:11:28.759 } 00:11:28.759 ] 00:11:28.759 }' 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.759 16:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.019 [2024-11-05 16:25:42.067372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.019 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.279 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.279 "name": "Existed_Raid", 00:11:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.279 "strip_size_kb": 0, 00:11:29.279 "state": "configuring", 00:11:29.279 "raid_level": "raid1", 00:11:29.279 "superblock": false, 00:11:29.279 "num_base_bdevs": 3, 
00:11:29.279 "num_base_bdevs_discovered": 1, 00:11:29.279 "num_base_bdevs_operational": 3, 00:11:29.279 "base_bdevs_list": [ 00:11:29.279 { 00:11:29.279 "name": "BaseBdev1", 00:11:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.279 "is_configured": false, 00:11:29.279 "data_offset": 0, 00:11:29.279 "data_size": 0 00:11:29.279 }, 00:11:29.279 { 00:11:29.279 "name": null, 00:11:29.279 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:29.279 "is_configured": false, 00:11:29.279 "data_offset": 0, 00:11:29.279 "data_size": 65536 00:11:29.279 }, 00:11:29.279 { 00:11:29.279 "name": "BaseBdev3", 00:11:29.279 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:29.279 "is_configured": true, 00:11:29.279 "data_offset": 0, 00:11:29.279 "data_size": 65536 00:11:29.279 } 00:11:29.279 ] 00:11:29.279 }' 00:11:29.279 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.279 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.539 16:25:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.539 [2024-11-05 16:25:42.612690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.539 BaseBdev1 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.539 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.814 [ 00:11:29.814 { 00:11:29.814 "name": "BaseBdev1", 00:11:29.814 "aliases": [ 00:11:29.814 "90c0e945-ac7c-461d-a675-9e439b98f98e" 00:11:29.814 ], 00:11:29.814 "product_name": "Malloc disk", 
00:11:29.814 "block_size": 512, 00:11:29.814 "num_blocks": 65536, 00:11:29.814 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:29.814 "assigned_rate_limits": { 00:11:29.814 "rw_ios_per_sec": 0, 00:11:29.814 "rw_mbytes_per_sec": 0, 00:11:29.814 "r_mbytes_per_sec": 0, 00:11:29.814 "w_mbytes_per_sec": 0 00:11:29.814 }, 00:11:29.814 "claimed": true, 00:11:29.814 "claim_type": "exclusive_write", 00:11:29.814 "zoned": false, 00:11:29.814 "supported_io_types": { 00:11:29.814 "read": true, 00:11:29.814 "write": true, 00:11:29.814 "unmap": true, 00:11:29.814 "flush": true, 00:11:29.814 "reset": true, 00:11:29.814 "nvme_admin": false, 00:11:29.814 "nvme_io": false, 00:11:29.814 "nvme_io_md": false, 00:11:29.814 "write_zeroes": true, 00:11:29.814 "zcopy": true, 00:11:29.814 "get_zone_info": false, 00:11:29.814 "zone_management": false, 00:11:29.814 "zone_append": false, 00:11:29.814 "compare": false, 00:11:29.814 "compare_and_write": false, 00:11:29.814 "abort": true, 00:11:29.814 "seek_hole": false, 00:11:29.814 "seek_data": false, 00:11:29.814 "copy": true, 00:11:29.814 "nvme_iov_md": false 00:11:29.814 }, 00:11:29.814 "memory_domains": [ 00:11:29.814 { 00:11:29.814 "dma_device_id": "system", 00:11:29.814 "dma_device_type": 1 00:11:29.814 }, 00:11:29.814 { 00:11:29.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.814 "dma_device_type": 2 00:11:29.814 } 00:11:29.814 ], 00:11:29.814 "driver_specific": {} 00:11:29.814 } 00:11:29.814 ] 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.814 "name": "Existed_Raid", 00:11:29.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.814 "strip_size_kb": 0, 00:11:29.814 "state": "configuring", 00:11:29.814 "raid_level": "raid1", 00:11:29.814 "superblock": false, 00:11:29.814 "num_base_bdevs": 3, 00:11:29.814 "num_base_bdevs_discovered": 2, 00:11:29.814 "num_base_bdevs_operational": 3, 00:11:29.814 "base_bdevs_list": [ 00:11:29.814 { 00:11:29.814 "name": "BaseBdev1", 00:11:29.814 "uuid": 
"90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:29.814 "is_configured": true, 00:11:29.814 "data_offset": 0, 00:11:29.814 "data_size": 65536 00:11:29.814 }, 00:11:29.814 { 00:11:29.814 "name": null, 00:11:29.814 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:29.814 "is_configured": false, 00:11:29.814 "data_offset": 0, 00:11:29.814 "data_size": 65536 00:11:29.814 }, 00:11:29.814 { 00:11:29.814 "name": "BaseBdev3", 00:11:29.814 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:29.814 "is_configured": true, 00:11:29.814 "data_offset": 0, 00:11:29.814 "data_size": 65536 00:11:29.814 } 00:11:29.814 ] 00:11:29.814 }' 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.814 16:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.118 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.118 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.118 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.118 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.118 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.379 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.380 [2024-11-05 16:25:43.207928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.380 16:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.380 "name": "Existed_Raid", 00:11:30.380 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:30.380 "strip_size_kb": 0, 00:11:30.380 "state": "configuring", 00:11:30.380 "raid_level": "raid1", 00:11:30.380 "superblock": false, 00:11:30.380 "num_base_bdevs": 3, 00:11:30.380 "num_base_bdevs_discovered": 1, 00:11:30.380 "num_base_bdevs_operational": 3, 00:11:30.380 "base_bdevs_list": [ 00:11:30.380 { 00:11:30.380 "name": "BaseBdev1", 00:11:30.380 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:30.380 "is_configured": true, 00:11:30.380 "data_offset": 0, 00:11:30.380 "data_size": 65536 00:11:30.380 }, 00:11:30.380 { 00:11:30.380 "name": null, 00:11:30.380 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:30.380 "is_configured": false, 00:11:30.380 "data_offset": 0, 00:11:30.380 "data_size": 65536 00:11:30.380 }, 00:11:30.380 { 00:11:30.380 "name": null, 00:11:30.380 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:30.380 "is_configured": false, 00:11:30.380 "data_offset": 0, 00:11:30.380 "data_size": 65536 00:11:30.380 } 00:11:30.380 ] 00:11:30.380 }' 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.380 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.643 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.643 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.643 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.643 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.901 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.901 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.901 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.901 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.902 [2024-11-05 16:25:43.778986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.902 "name": "Existed_Raid", 00:11:30.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.902 "strip_size_kb": 0, 00:11:30.902 "state": "configuring", 00:11:30.902 "raid_level": "raid1", 00:11:30.902 "superblock": false, 00:11:30.902 "num_base_bdevs": 3, 00:11:30.902 "num_base_bdevs_discovered": 2, 00:11:30.902 "num_base_bdevs_operational": 3, 00:11:30.902 "base_bdevs_list": [ 00:11:30.902 { 00:11:30.902 "name": "BaseBdev1", 00:11:30.902 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:30.902 "is_configured": true, 00:11:30.902 "data_offset": 0, 00:11:30.902 "data_size": 65536 00:11:30.902 }, 00:11:30.902 { 00:11:30.902 "name": null, 00:11:30.902 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:30.902 "is_configured": false, 00:11:30.902 "data_offset": 0, 00:11:30.902 "data_size": 65536 00:11:30.902 }, 00:11:30.902 { 00:11:30.902 "name": "BaseBdev3", 00:11:30.902 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:30.902 "is_configured": true, 00:11:30.902 "data_offset": 0, 00:11:30.902 "data_size": 65536 00:11:30.902 } 00:11:30.902 ] 00:11:30.902 }' 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.902 16:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.470 [2024-11-05 16:25:44.318186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.470 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.471 16:25:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.471 "name": "Existed_Raid", 00:11:31.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.471 "strip_size_kb": 0, 00:11:31.471 "state": "configuring", 00:11:31.471 "raid_level": "raid1", 00:11:31.471 "superblock": false, 00:11:31.471 "num_base_bdevs": 3, 00:11:31.471 "num_base_bdevs_discovered": 1, 00:11:31.471 "num_base_bdevs_operational": 3, 00:11:31.471 "base_bdevs_list": [ 00:11:31.471 { 00:11:31.471 "name": null, 00:11:31.471 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:31.471 "is_configured": false, 00:11:31.471 "data_offset": 0, 00:11:31.471 "data_size": 65536 00:11:31.471 }, 00:11:31.471 { 00:11:31.471 "name": null, 00:11:31.471 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:31.471 "is_configured": false, 00:11:31.471 "data_offset": 0, 00:11:31.471 "data_size": 65536 00:11:31.471 }, 00:11:31.471 { 00:11:31.471 "name": "BaseBdev3", 00:11:31.471 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:31.471 "is_configured": true, 00:11:31.471 "data_offset": 0, 00:11:31.471 "data_size": 65536 00:11:31.471 } 00:11:31.471 ] 00:11:31.471 }' 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.471 16:25:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.039 [2024-11-05 16:25:44.988659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.039 16:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.039 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.039 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.039 "name": "Existed_Raid", 00:11:32.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.039 "strip_size_kb": 0, 00:11:32.039 "state": "configuring", 00:11:32.039 "raid_level": "raid1", 00:11:32.039 "superblock": false, 00:11:32.039 "num_base_bdevs": 3, 00:11:32.039 "num_base_bdevs_discovered": 2, 00:11:32.039 "num_base_bdevs_operational": 3, 00:11:32.039 "base_bdevs_list": [ 00:11:32.039 { 00:11:32.039 "name": null, 00:11:32.039 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:32.039 "is_configured": false, 00:11:32.039 "data_offset": 0, 00:11:32.039 "data_size": 65536 00:11:32.039 }, 00:11:32.039 { 00:11:32.039 "name": "BaseBdev2", 00:11:32.039 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:32.039 "is_configured": true, 00:11:32.039 "data_offset": 0, 00:11:32.039 "data_size": 65536 00:11:32.039 }, 00:11:32.039 { 
00:11:32.039 "name": "BaseBdev3", 00:11:32.039 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:32.039 "is_configured": true, 00:11:32.039 "data_offset": 0, 00:11:32.039 "data_size": 65536 00:11:32.039 } 00:11:32.039 ] 00:11:32.039 }' 00:11:32.039 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.039 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90c0e945-ac7c-461d-a675-9e439b98f98e 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.609 16:25:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 [2024-11-05 16:25:45.620622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.609 [2024-11-05 16:25:45.620695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.609 [2024-11-05 16:25:45.620705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:32.609 [2024-11-05 16:25:45.620996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:32.609 [2024-11-05 16:25:45.621196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.609 [2024-11-05 16:25:45.621219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.609 [2024-11-05 16:25:45.621537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.609 NewBaseBdev 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.609 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.609 [ 00:11:32.609 { 00:11:32.609 "name": "NewBaseBdev", 00:11:32.609 "aliases": [ 00:11:32.609 "90c0e945-ac7c-461d-a675-9e439b98f98e" 00:11:32.609 ], 00:11:32.609 "product_name": "Malloc disk", 00:11:32.609 "block_size": 512, 00:11:32.609 "num_blocks": 65536, 00:11:32.609 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:32.609 "assigned_rate_limits": { 00:11:32.609 "rw_ios_per_sec": 0, 00:11:32.609 "rw_mbytes_per_sec": 0, 00:11:32.609 "r_mbytes_per_sec": 0, 00:11:32.609 "w_mbytes_per_sec": 0 00:11:32.609 }, 00:11:32.609 "claimed": true, 00:11:32.609 "claim_type": "exclusive_write", 00:11:32.609 "zoned": false, 00:11:32.609 "supported_io_types": { 00:11:32.609 "read": true, 00:11:32.609 "write": true, 00:11:32.609 "unmap": true, 00:11:32.609 "flush": true, 00:11:32.609 "reset": true, 00:11:32.609 "nvme_admin": false, 00:11:32.609 "nvme_io": false, 00:11:32.609 "nvme_io_md": false, 00:11:32.609 "write_zeroes": true, 00:11:32.609 "zcopy": true, 00:11:32.609 "get_zone_info": false, 00:11:32.609 "zone_management": false, 00:11:32.609 "zone_append": false, 00:11:32.609 "compare": false, 00:11:32.609 "compare_and_write": false, 00:11:32.609 "abort": true, 00:11:32.609 "seek_hole": false, 00:11:32.609 "seek_data": false, 00:11:32.609 "copy": true, 00:11:32.609 "nvme_iov_md": false 00:11:32.609 }, 00:11:32.609 "memory_domains": [ 00:11:32.609 { 00:11:32.609 
"dma_device_id": "system", 00:11:32.610 "dma_device_type": 1 00:11:32.610 }, 00:11:32.610 { 00:11:32.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.610 "dma_device_type": 2 00:11:32.610 } 00:11:32.610 ], 00:11:32.610 "driver_specific": {} 00:11:32.610 } 00:11:32.610 ] 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.610 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.869 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.869 "name": "Existed_Raid", 00:11:32.869 "uuid": "dfc74acc-26f4-46e4-906f-5a2b23e1f7c0", 00:11:32.869 "strip_size_kb": 0, 00:11:32.869 "state": "online", 00:11:32.869 "raid_level": "raid1", 00:11:32.869 "superblock": false, 00:11:32.869 "num_base_bdevs": 3, 00:11:32.869 "num_base_bdevs_discovered": 3, 00:11:32.869 "num_base_bdevs_operational": 3, 00:11:32.869 "base_bdevs_list": [ 00:11:32.869 { 00:11:32.869 "name": "NewBaseBdev", 00:11:32.869 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:32.869 "is_configured": true, 00:11:32.869 "data_offset": 0, 00:11:32.869 "data_size": 65536 00:11:32.869 }, 00:11:32.869 { 00:11:32.869 "name": "BaseBdev2", 00:11:32.869 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:32.869 "is_configured": true, 00:11:32.869 "data_offset": 0, 00:11:32.869 "data_size": 65536 00:11:32.869 }, 00:11:32.869 { 00:11:32.869 "name": "BaseBdev3", 00:11:32.869 "uuid": "60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:32.869 "is_configured": true, 00:11:32.869 "data_offset": 0, 00:11:32.869 "data_size": 65536 00:11:32.869 } 00:11:32.869 ] 00:11:32.869 }' 00:11:32.869 16:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.869 16:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.128 16:25:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.128 [2024-11-05 16:25:46.156113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.128 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.128 "name": "Existed_Raid", 00:11:33.128 "aliases": [ 00:11:33.128 "dfc74acc-26f4-46e4-906f-5a2b23e1f7c0" 00:11:33.128 ], 00:11:33.128 "product_name": "Raid Volume", 00:11:33.128 "block_size": 512, 00:11:33.128 "num_blocks": 65536, 00:11:33.128 "uuid": "dfc74acc-26f4-46e4-906f-5a2b23e1f7c0", 00:11:33.128 "assigned_rate_limits": { 00:11:33.128 "rw_ios_per_sec": 0, 00:11:33.128 "rw_mbytes_per_sec": 0, 00:11:33.128 "r_mbytes_per_sec": 0, 00:11:33.128 "w_mbytes_per_sec": 0 00:11:33.128 }, 00:11:33.128 "claimed": false, 00:11:33.128 "zoned": false, 00:11:33.128 "supported_io_types": { 00:11:33.128 "read": true, 00:11:33.128 "write": true, 00:11:33.128 "unmap": false, 00:11:33.128 "flush": false, 00:11:33.128 "reset": true, 00:11:33.128 "nvme_admin": false, 00:11:33.128 "nvme_io": false, 00:11:33.128 "nvme_io_md": false, 00:11:33.128 "write_zeroes": true, 00:11:33.128 "zcopy": false, 00:11:33.128 
"get_zone_info": false, 00:11:33.128 "zone_management": false, 00:11:33.128 "zone_append": false, 00:11:33.128 "compare": false, 00:11:33.128 "compare_and_write": false, 00:11:33.128 "abort": false, 00:11:33.128 "seek_hole": false, 00:11:33.128 "seek_data": false, 00:11:33.128 "copy": false, 00:11:33.128 "nvme_iov_md": false 00:11:33.128 }, 00:11:33.128 "memory_domains": [ 00:11:33.128 { 00:11:33.128 "dma_device_id": "system", 00:11:33.128 "dma_device_type": 1 00:11:33.128 }, 00:11:33.128 { 00:11:33.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.128 "dma_device_type": 2 00:11:33.128 }, 00:11:33.128 { 00:11:33.128 "dma_device_id": "system", 00:11:33.128 "dma_device_type": 1 00:11:33.128 }, 00:11:33.128 { 00:11:33.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.128 "dma_device_type": 2 00:11:33.128 }, 00:11:33.128 { 00:11:33.128 "dma_device_id": "system", 00:11:33.128 "dma_device_type": 1 00:11:33.128 }, 00:11:33.128 { 00:11:33.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.128 "dma_device_type": 2 00:11:33.129 } 00:11:33.129 ], 00:11:33.129 "driver_specific": { 00:11:33.129 "raid": { 00:11:33.129 "uuid": "dfc74acc-26f4-46e4-906f-5a2b23e1f7c0", 00:11:33.129 "strip_size_kb": 0, 00:11:33.129 "state": "online", 00:11:33.129 "raid_level": "raid1", 00:11:33.129 "superblock": false, 00:11:33.129 "num_base_bdevs": 3, 00:11:33.129 "num_base_bdevs_discovered": 3, 00:11:33.129 "num_base_bdevs_operational": 3, 00:11:33.129 "base_bdevs_list": [ 00:11:33.129 { 00:11:33.129 "name": "NewBaseBdev", 00:11:33.129 "uuid": "90c0e945-ac7c-461d-a675-9e439b98f98e", 00:11:33.129 "is_configured": true, 00:11:33.129 "data_offset": 0, 00:11:33.129 "data_size": 65536 00:11:33.129 }, 00:11:33.129 { 00:11:33.129 "name": "BaseBdev2", 00:11:33.129 "uuid": "d4c7bab6-0d72-4200-ae59-fe8532ac41b2", 00:11:33.129 "is_configured": true, 00:11:33.129 "data_offset": 0, 00:11:33.129 "data_size": 65536 00:11:33.129 }, 00:11:33.129 { 00:11:33.129 "name": "BaseBdev3", 00:11:33.129 "uuid": 
"60c05c97-dff3-470a-b75d-cccb5d07e2cb", 00:11:33.129 "is_configured": true, 00:11:33.129 "data_offset": 0, 00:11:33.129 "data_size": 65536 00:11:33.129 } 00:11:33.129 ] 00:11:33.129 } 00:11:33.129 } 00:11:33.129 }' 00:11:33.129 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:33.389 BaseBdev2 00:11:33.389 BaseBdev3' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.389 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.647 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.647 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.647 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.648 
[2024-11-05 16:25:46.487223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.648 [2024-11-05 16:25:46.487269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.648 [2024-11-05 16:25:46.487372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.648 [2024-11-05 16:25:46.487716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.648 [2024-11-05 16:25:46.487736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67665 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67665 ']' 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67665 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67665 00:11:33.648 killing process with pid 67665 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67665' 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67665 00:11:33.648 [2024-11-05 
16:25:46.532038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.648 16:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67665 00:11:33.907 [2024-11-05 16:25:46.880705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:35.288 00:11:35.288 real 0m11.526s 00:11:35.288 user 0m18.254s 00:11:35.288 sys 0m1.982s 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.288 ************************************ 00:11:35.288 END TEST raid_state_function_test 00:11:35.288 ************************************ 00:11:35.288 16:25:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:35.288 16:25:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:35.288 16:25:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.288 16:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.288 ************************************ 00:11:35.288 START TEST raid_state_function_test_sb 00:11:35.288 ************************************ 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:35.288 16:25:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:35.288 
16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68292 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68292' 00:11:35.288 Process raid pid: 68292 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68292 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68292 ']' 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.288 16:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.288 [2024-11-05 16:25:48.319414] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:11:35.288 [2024-11-05 16:25:48.319817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.547 [2024-11-05 16:25:48.504200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.547 [2024-11-05 16:25:48.634401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.807 [2024-11-05 16:25:48.860879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.807 [2024-11-05 16:25:48.860914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.375 [2024-11-05 16:25:49.202233] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.375 [2024-11-05 16:25:49.202298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.375 [2024-11-05 16:25:49.202314] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.375 [2024-11-05 16:25:49.202344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.375 [2024-11-05 16:25:49.202352] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:36.375 [2024-11-05 16:25:49.202363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.375 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.376 "name": "Existed_Raid", 00:11:36.376 "uuid": "aeefbc43-c5b3-4b07-ae9d-5b0c53486dbd", 00:11:36.376 "strip_size_kb": 0, 00:11:36.376 "state": "configuring", 00:11:36.376 "raid_level": "raid1", 00:11:36.376 "superblock": true, 00:11:36.376 "num_base_bdevs": 3, 00:11:36.376 "num_base_bdevs_discovered": 0, 00:11:36.376 "num_base_bdevs_operational": 3, 00:11:36.376 "base_bdevs_list": [ 00:11:36.376 { 00:11:36.376 "name": "BaseBdev1", 00:11:36.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.376 "is_configured": false, 00:11:36.376 "data_offset": 0, 00:11:36.376 "data_size": 0 00:11:36.376 }, 00:11:36.376 { 00:11:36.376 "name": "BaseBdev2", 00:11:36.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.376 "is_configured": false, 00:11:36.376 "data_offset": 0, 00:11:36.376 "data_size": 0 00:11:36.376 }, 00:11:36.376 { 00:11:36.376 "name": "BaseBdev3", 00:11:36.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.376 "is_configured": false, 00:11:36.376 "data_offset": 0, 00:11:36.376 "data_size": 0 00:11:36.376 } 00:11:36.376 ] 00:11:36.376 }' 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.376 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.636 [2024-11-05 16:25:49.653447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.636 [2024-11-05 16:25:49.653554] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.636 [2024-11-05 16:25:49.665436] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.636 [2024-11-05 16:25:49.665546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.636 [2024-11-05 16:25:49.665589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.636 [2024-11-05 16:25:49.665620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.636 [2024-11-05 16:25:49.665651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.636 [2024-11-05 16:25:49.665678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.636 [2024-11-05 16:25:49.718574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.636 BaseBdev1 
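The `verify_raid_bdev_state` calls traced above fetch the raid bdev list with `rpc_cmd bdev_raid_get_bdevs all` and pick out one entry with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal sketch of that same selection in Python, using a sample payload shaped like the JSON captured in this log (this is only an illustration of the filter, not SPDK code):

```python
import json

# Sample shaped like the `bdev_raid_get_bdevs all` output seen above;
# field values are copied from the log entry for the freshly created,
# still-configuring Existed_Raid volume.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 3
  }
]
""")

# Equivalent of jq's '.[] | select(.name == "Existed_Raid")'.
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")
print(info["state"], info["num_base_bdevs_discovered"])
```

The test then compares fields like `state`, `raid_level`, and the discovered/operational counts against the expected values passed to `verify_raid_bdev_state`.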
00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.636 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.895 [ 00:11:36.895 { 00:11:36.895 "name": "BaseBdev1", 00:11:36.895 "aliases": [ 00:11:36.895 "3e2f0429-04b9-4c0f-b3af-6adc6ad96476" 00:11:36.895 ], 00:11:36.895 "product_name": "Malloc disk", 00:11:36.895 "block_size": 512, 00:11:36.895 "num_blocks": 65536, 00:11:36.895 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:36.895 "assigned_rate_limits": { 00:11:36.895 
"rw_ios_per_sec": 0, 00:11:36.895 "rw_mbytes_per_sec": 0, 00:11:36.895 "r_mbytes_per_sec": 0, 00:11:36.895 "w_mbytes_per_sec": 0 00:11:36.895 }, 00:11:36.895 "claimed": true, 00:11:36.895 "claim_type": "exclusive_write", 00:11:36.895 "zoned": false, 00:11:36.895 "supported_io_types": { 00:11:36.895 "read": true, 00:11:36.895 "write": true, 00:11:36.895 "unmap": true, 00:11:36.895 "flush": true, 00:11:36.895 "reset": true, 00:11:36.895 "nvme_admin": false, 00:11:36.895 "nvme_io": false, 00:11:36.895 "nvme_io_md": false, 00:11:36.895 "write_zeroes": true, 00:11:36.895 "zcopy": true, 00:11:36.895 "get_zone_info": false, 00:11:36.895 "zone_management": false, 00:11:36.895 "zone_append": false, 00:11:36.895 "compare": false, 00:11:36.895 "compare_and_write": false, 00:11:36.895 "abort": true, 00:11:36.895 "seek_hole": false, 00:11:36.895 "seek_data": false, 00:11:36.895 "copy": true, 00:11:36.895 "nvme_iov_md": false 00:11:36.895 }, 00:11:36.895 "memory_domains": [ 00:11:36.895 { 00:11:36.895 "dma_device_id": "system", 00:11:36.895 "dma_device_type": 1 00:11:36.895 }, 00:11:36.895 { 00:11:36.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.895 "dma_device_type": 2 00:11:36.895 } 00:11:36.895 ], 00:11:36.895 "driver_specific": {} 00:11:36.895 } 00:11:36.895 ] 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.895 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.895 "name": "Existed_Raid", 00:11:36.895 "uuid": "0143caa1-96f6-427e-b310-1cb5bcba767f", 00:11:36.895 "strip_size_kb": 0, 00:11:36.895 "state": "configuring", 00:11:36.895 "raid_level": "raid1", 00:11:36.895 "superblock": true, 00:11:36.895 "num_base_bdevs": 3, 00:11:36.895 "num_base_bdevs_discovered": 1, 00:11:36.895 "num_base_bdevs_operational": 3, 00:11:36.895 "base_bdevs_list": [ 00:11:36.895 { 00:11:36.895 "name": "BaseBdev1", 00:11:36.895 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:36.895 "is_configured": true, 00:11:36.896 "data_offset": 2048, 00:11:36.896 "data_size": 63488 
00:11:36.896 }, 00:11:36.896 { 00:11:36.896 "name": "BaseBdev2", 00:11:36.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.896 "is_configured": false, 00:11:36.896 "data_offset": 0, 00:11:36.896 "data_size": 0 00:11:36.896 }, 00:11:36.896 { 00:11:36.896 "name": "BaseBdev3", 00:11:36.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.896 "is_configured": false, 00:11:36.896 "data_offset": 0, 00:11:36.896 "data_size": 0 00:11:36.896 } 00:11:36.896 ] 00:11:36.896 }' 00:11:36.896 16:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.896 16:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.155 [2024-11-05 16:25:50.237810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.155 [2024-11-05 16:25:50.237945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.155 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.419 [2024-11-05 16:25:50.249887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.419 [2024-11-05 16:25:50.251998] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.419 [2024-11-05 16:25:50.252054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.419 [2024-11-05 16:25:50.252066] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.419 [2024-11-05 16:25:50.252076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.419 "name": "Existed_Raid", 00:11:37.419 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:37.419 "strip_size_kb": 0, 00:11:37.419 "state": "configuring", 00:11:37.419 "raid_level": "raid1", 00:11:37.419 "superblock": true, 00:11:37.419 "num_base_bdevs": 3, 00:11:37.419 "num_base_bdevs_discovered": 1, 00:11:37.419 "num_base_bdevs_operational": 3, 00:11:37.419 "base_bdevs_list": [ 00:11:37.419 { 00:11:37.419 "name": "BaseBdev1", 00:11:37.419 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:37.419 "is_configured": true, 00:11:37.419 "data_offset": 2048, 00:11:37.419 "data_size": 63488 00:11:37.419 }, 00:11:37.419 { 00:11:37.419 "name": "BaseBdev2", 00:11:37.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.419 "is_configured": false, 00:11:37.419 "data_offset": 0, 00:11:37.419 "data_size": 0 00:11:37.419 }, 00:11:37.419 { 00:11:37.419 "name": "BaseBdev3", 00:11:37.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.419 "is_configured": false, 00:11:37.419 "data_offset": 0, 00:11:37.419 "data_size": 0 00:11:37.419 } 00:11:37.419 ] 00:11:37.419 }' 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.419 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:37.678 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.678 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.678 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.938 [2024-11-05 16:25:50.775013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.938 BaseBdev2 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.938 [ 00:11:37.938 { 00:11:37.938 "name": "BaseBdev2", 00:11:37.938 "aliases": [ 00:11:37.938 "ca0df799-25e0-495c-8983-fb7bc2249184" 00:11:37.938 ], 00:11:37.938 "product_name": "Malloc disk", 00:11:37.938 "block_size": 512, 00:11:37.938 "num_blocks": 65536, 00:11:37.938 "uuid": "ca0df799-25e0-495c-8983-fb7bc2249184", 00:11:37.938 "assigned_rate_limits": { 00:11:37.938 "rw_ios_per_sec": 0, 00:11:37.938 "rw_mbytes_per_sec": 0, 00:11:37.938 "r_mbytes_per_sec": 0, 00:11:37.938 "w_mbytes_per_sec": 0 00:11:37.938 }, 00:11:37.938 "claimed": true, 00:11:37.938 "claim_type": "exclusive_write", 00:11:37.938 "zoned": false, 00:11:37.938 "supported_io_types": { 00:11:37.938 "read": true, 00:11:37.938 "write": true, 00:11:37.938 "unmap": true, 00:11:37.938 "flush": true, 00:11:37.938 "reset": true, 00:11:37.938 "nvme_admin": false, 00:11:37.938 "nvme_io": false, 00:11:37.938 "nvme_io_md": false, 00:11:37.938 "write_zeroes": true, 00:11:37.938 "zcopy": true, 00:11:37.938 "get_zone_info": false, 00:11:37.938 "zone_management": false, 00:11:37.938 "zone_append": false, 00:11:37.938 "compare": false, 00:11:37.938 "compare_and_write": false, 00:11:37.938 "abort": true, 00:11:37.938 "seek_hole": false, 00:11:37.938 "seek_data": false, 00:11:37.938 "copy": true, 00:11:37.938 "nvme_iov_md": false 00:11:37.938 }, 00:11:37.938 "memory_domains": [ 00:11:37.938 { 00:11:37.938 "dma_device_id": "system", 00:11:37.938 "dma_device_type": 1 00:11:37.938 }, 00:11:37.938 { 00:11:37.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.938 "dma_device_type": 2 00:11:37.938 } 00:11:37.938 ], 00:11:37.938 "driver_specific": {} 00:11:37.938 } 00:11:37.938 ] 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.938 
16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.938 "name": "Existed_Raid", 00:11:37.938 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:37.938 "strip_size_kb": 0, 00:11:37.938 "state": "configuring", 00:11:37.938 "raid_level": "raid1", 00:11:37.938 "superblock": true, 00:11:37.938 "num_base_bdevs": 3, 00:11:37.938 "num_base_bdevs_discovered": 2, 00:11:37.938 "num_base_bdevs_operational": 3, 00:11:37.938 "base_bdevs_list": [ 00:11:37.938 { 00:11:37.938 "name": "BaseBdev1", 00:11:37.938 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:37.938 "is_configured": true, 00:11:37.938 "data_offset": 2048, 00:11:37.938 "data_size": 63488 00:11:37.938 }, 00:11:37.938 { 00:11:37.938 "name": "BaseBdev2", 00:11:37.938 "uuid": "ca0df799-25e0-495c-8983-fb7bc2249184", 00:11:37.938 "is_configured": true, 00:11:37.938 "data_offset": 2048, 00:11:37.938 "data_size": 63488 00:11:37.938 }, 00:11:37.938 { 00:11:37.938 "name": "BaseBdev3", 00:11:37.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.938 "is_configured": false, 00:11:37.938 "data_offset": 0, 00:11:37.938 "data_size": 0 00:11:37.938 } 00:11:37.938 ] 00:11:37.938 }' 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.938 16:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 [2024-11-05 16:25:51.345233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.507 [2024-11-05 16:25:51.345706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:38.507 [2024-11-05 16:25:51.345738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.507 [2024-11-05 16:25:51.346058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:38.507 [2024-11-05 16:25:51.346233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.507 [2024-11-05 16:25:51.346244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:38.507 BaseBdev3 00:11:38.507 [2024-11-05 16:25:51.346411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.507 16:25:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 [ 00:11:38.507 { 00:11:38.507 "name": "BaseBdev3", 00:11:38.507 "aliases": [ 00:11:38.507 "c0f575bc-6b72-49aa-9088-4129d11ba7d8" 00:11:38.507 ], 00:11:38.507 "product_name": "Malloc disk", 00:11:38.507 "block_size": 512, 00:11:38.507 "num_blocks": 65536, 00:11:38.507 "uuid": "c0f575bc-6b72-49aa-9088-4129d11ba7d8", 00:11:38.507 "assigned_rate_limits": { 00:11:38.507 "rw_ios_per_sec": 0, 00:11:38.507 "rw_mbytes_per_sec": 0, 00:11:38.507 "r_mbytes_per_sec": 0, 00:11:38.507 "w_mbytes_per_sec": 0 00:11:38.507 }, 00:11:38.507 "claimed": true, 00:11:38.507 "claim_type": "exclusive_write", 00:11:38.507 "zoned": false, 00:11:38.507 "supported_io_types": { 00:11:38.507 "read": true, 00:11:38.507 "write": true, 00:11:38.507 "unmap": true, 00:11:38.507 "flush": true, 00:11:38.507 "reset": true, 00:11:38.507 "nvme_admin": false, 00:11:38.507 "nvme_io": false, 00:11:38.507 "nvme_io_md": false, 00:11:38.507 "write_zeroes": true, 00:11:38.507 "zcopy": true, 00:11:38.507 "get_zone_info": false, 00:11:38.507 "zone_management": false, 00:11:38.507 "zone_append": false, 00:11:38.507 "compare": false, 00:11:38.507 "compare_and_write": false, 00:11:38.507 "abort": true, 00:11:38.507 "seek_hole": false, 00:11:38.507 "seek_data": false, 00:11:38.507 "copy": true, 00:11:38.507 "nvme_iov_md": false 00:11:38.507 }, 00:11:38.507 "memory_domains": [ 00:11:38.507 { 00:11:38.507 "dma_device_id": "system", 00:11:38.507 "dma_device_type": 1 00:11:38.507 }, 00:11:38.507 { 00:11:38.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.507 "dma_device_type": 2 00:11:38.507 } 00:11:38.507 ], 00:11:38.507 "driver_specific": {} 00:11:38.507 } 00:11:38.507 ] 
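Across the entries above, the raid bdev reports `"state": "configuring"` while `num_base_bdevs_discovered` climbs from 0 to 2, and flips to `"online"` only once BaseBdev3 is claimed and all three base bdevs are discovered. A tiny sketch of that observed invariant (an illustration of the progression visible in this log, not SPDK's actual state machine):

```python
def expected_raid_state(discovered: int, operational: int) -> str:
    # Mirrors the behaviour traced above: the raid bdev stays
    # "configuring" until every operational base bdev has been
    # discovered and claimed, then comes "online".
    return "online" if discovered == operational else "configuring"

# Progression captured in the log: 0/3 -> 1/3 -> 2/3 -> 3/3
states = [expected_raid_state(d, 3) for d in range(4)]
print(states)
```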
00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 16:25:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.507 "name": "Existed_Raid", 00:11:38.507 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:38.507 "strip_size_kb": 0, 00:11:38.507 "state": "online", 00:11:38.507 "raid_level": "raid1", 00:11:38.507 "superblock": true, 00:11:38.507 "num_base_bdevs": 3, 00:11:38.507 "num_base_bdevs_discovered": 3, 00:11:38.507 "num_base_bdevs_operational": 3, 00:11:38.507 "base_bdevs_list": [ 00:11:38.507 { 00:11:38.507 "name": "BaseBdev1", 00:11:38.507 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:38.507 "is_configured": true, 00:11:38.507 "data_offset": 2048, 00:11:38.507 "data_size": 63488 00:11:38.507 }, 00:11:38.507 { 00:11:38.507 "name": "BaseBdev2", 00:11:38.507 "uuid": "ca0df799-25e0-495c-8983-fb7bc2249184", 00:11:38.507 "is_configured": true, 00:11:38.507 "data_offset": 2048, 00:11:38.507 "data_size": 63488 00:11:38.507 }, 00:11:38.507 { 00:11:38.507 "name": "BaseBdev3", 00:11:38.507 "uuid": "c0f575bc-6b72-49aa-9088-4129d11ba7d8", 00:11:38.507 "is_configured": true, 00:11:38.507 "data_offset": 2048, 00:11:38.507 "data_size": 63488 00:11:38.507 } 00:11:38.507 ] 00:11:38.507 }' 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.507 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 [2024-11-05 16:25:51.876973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.076 "name": "Existed_Raid", 00:11:39.076 "aliases": [ 00:11:39.076 "286a0202-c4b3-42dc-9252-ef9bda9abbda" 00:11:39.076 ], 00:11:39.076 "product_name": "Raid Volume", 00:11:39.076 "block_size": 512, 00:11:39.076 "num_blocks": 63488, 00:11:39.076 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:39.076 "assigned_rate_limits": { 00:11:39.076 "rw_ios_per_sec": 0, 00:11:39.076 "rw_mbytes_per_sec": 0, 00:11:39.076 "r_mbytes_per_sec": 0, 00:11:39.076 "w_mbytes_per_sec": 0 00:11:39.076 }, 00:11:39.076 "claimed": false, 00:11:39.076 "zoned": false, 00:11:39.076 "supported_io_types": { 00:11:39.076 "read": true, 00:11:39.076 "write": true, 00:11:39.076 "unmap": false, 00:11:39.076 "flush": false, 00:11:39.076 "reset": true, 00:11:39.076 "nvme_admin": false, 00:11:39.076 "nvme_io": false, 00:11:39.076 "nvme_io_md": false, 00:11:39.076 
"write_zeroes": true, 00:11:39.076 "zcopy": false, 00:11:39.076 "get_zone_info": false, 00:11:39.076 "zone_management": false, 00:11:39.076 "zone_append": false, 00:11:39.076 "compare": false, 00:11:39.076 "compare_and_write": false, 00:11:39.076 "abort": false, 00:11:39.076 "seek_hole": false, 00:11:39.076 "seek_data": false, 00:11:39.076 "copy": false, 00:11:39.076 "nvme_iov_md": false 00:11:39.076 }, 00:11:39.076 "memory_domains": [ 00:11:39.076 { 00:11:39.076 "dma_device_id": "system", 00:11:39.076 "dma_device_type": 1 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.076 "dma_device_type": 2 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "dma_device_id": "system", 00:11:39.076 "dma_device_type": 1 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.076 "dma_device_type": 2 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "dma_device_id": "system", 00:11:39.076 "dma_device_type": 1 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.076 "dma_device_type": 2 00:11:39.076 } 00:11:39.076 ], 00:11:39.076 "driver_specific": { 00:11:39.076 "raid": { 00:11:39.076 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:39.076 "strip_size_kb": 0, 00:11:39.076 "state": "online", 00:11:39.076 "raid_level": "raid1", 00:11:39.076 "superblock": true, 00:11:39.076 "num_base_bdevs": 3, 00:11:39.076 "num_base_bdevs_discovered": 3, 00:11:39.076 "num_base_bdevs_operational": 3, 00:11:39.076 "base_bdevs_list": [ 00:11:39.076 { 00:11:39.076 "name": "BaseBdev1", 00:11:39.076 "uuid": "3e2f0429-04b9-4c0f-b3af-6adc6ad96476", 00:11:39.076 "is_configured": true, 00:11:39.076 "data_offset": 2048, 00:11:39.076 "data_size": 63488 00:11:39.076 }, 00:11:39.076 { 00:11:39.076 "name": "BaseBdev2", 00:11:39.076 "uuid": "ca0df799-25e0-495c-8983-fb7bc2249184", 00:11:39.076 "is_configured": true, 00:11:39.076 "data_offset": 2048, 00:11:39.076 "data_size": 63488 00:11:39.076 }, 
00:11:39.076 { 00:11:39.076 "name": "BaseBdev3", 00:11:39.076 "uuid": "c0f575bc-6b72-49aa-9088-4129d11ba7d8", 00:11:39.076 "is_configured": true, 00:11:39.076 "data_offset": 2048, 00:11:39.076 "data_size": 63488 00:11:39.076 } 00:11:39.076 ] 00:11:39.076 } 00:11:39.076 } 00:11:39.076 }' 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:39.076 BaseBdev2 00:11:39.076 BaseBdev3' 00:11:39.076 16:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.076 
16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.076 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.077 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.077 [2024-11-05 16:25:52.152192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.336 
16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.336 "name": "Existed_Raid", 00:11:39.336 "uuid": "286a0202-c4b3-42dc-9252-ef9bda9abbda", 00:11:39.336 "strip_size_kb": 0, 00:11:39.336 "state": "online", 00:11:39.336 "raid_level": "raid1", 00:11:39.336 "superblock": true, 00:11:39.336 "num_base_bdevs": 3, 00:11:39.336 "num_base_bdevs_discovered": 2, 00:11:39.336 "num_base_bdevs_operational": 2, 00:11:39.336 "base_bdevs_list": [ 00:11:39.336 { 00:11:39.336 "name": null, 00:11:39.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.336 "is_configured": false, 00:11:39.336 "data_offset": 0, 00:11:39.336 "data_size": 63488 00:11:39.336 }, 00:11:39.336 { 00:11:39.336 "name": "BaseBdev2", 00:11:39.336 "uuid": "ca0df799-25e0-495c-8983-fb7bc2249184", 00:11:39.336 "is_configured": true, 00:11:39.336 "data_offset": 2048, 00:11:39.336 "data_size": 63488 00:11:39.336 }, 00:11:39.336 { 00:11:39.336 "name": "BaseBdev3", 00:11:39.336 "uuid": "c0f575bc-6b72-49aa-9088-4129d11ba7d8", 00:11:39.336 "is_configured": true, 00:11:39.336 "data_offset": 2048, 00:11:39.336 "data_size": 63488 00:11:39.336 } 00:11:39.336 ] 00:11:39.336 }' 00:11:39.336 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.336 
16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 [2024-11-05 16:25:52.787951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.903 16:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.903 [2024-11-05 16:25:52.954420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.903 [2024-11-05 16:25:52.954622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.163 [2024-11-05 16:25:53.062612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.163 [2024-11-05 16:25:53.062758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.163 [2024-11-05 16:25:53.062809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.163 BaseBdev2 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.163 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.163 [ 00:11:40.163 { 00:11:40.163 "name": "BaseBdev2", 00:11:40.163 "aliases": [ 00:11:40.163 "a3acbd0a-05a4-48c0-96a0-bb7d47028afe" 00:11:40.163 ], 00:11:40.163 "product_name": "Malloc disk", 00:11:40.163 "block_size": 512, 00:11:40.163 "num_blocks": 65536, 00:11:40.163 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:40.163 "assigned_rate_limits": { 00:11:40.163 "rw_ios_per_sec": 0, 00:11:40.163 "rw_mbytes_per_sec": 0, 00:11:40.163 "r_mbytes_per_sec": 0, 00:11:40.163 "w_mbytes_per_sec": 0 00:11:40.163 }, 00:11:40.163 "claimed": false, 00:11:40.163 "zoned": false, 00:11:40.163 "supported_io_types": { 00:11:40.163 "read": true, 00:11:40.163 "write": true, 00:11:40.163 "unmap": true, 00:11:40.163 "flush": true, 00:11:40.163 "reset": true, 00:11:40.163 "nvme_admin": false, 00:11:40.163 "nvme_io": false, 00:11:40.163 
"nvme_io_md": false, 00:11:40.163 "write_zeroes": true, 00:11:40.163 "zcopy": true, 00:11:40.164 "get_zone_info": false, 00:11:40.164 "zone_management": false, 00:11:40.164 "zone_append": false, 00:11:40.164 "compare": false, 00:11:40.164 "compare_and_write": false, 00:11:40.164 "abort": true, 00:11:40.164 "seek_hole": false, 00:11:40.164 "seek_data": false, 00:11:40.164 "copy": true, 00:11:40.164 "nvme_iov_md": false 00:11:40.164 }, 00:11:40.164 "memory_domains": [ 00:11:40.164 { 00:11:40.164 "dma_device_id": "system", 00:11:40.164 "dma_device_type": 1 00:11:40.164 }, 00:11:40.164 { 00:11:40.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.164 "dma_device_type": 2 00:11:40.164 } 00:11:40.164 ], 00:11:40.164 "driver_specific": {} 00:11:40.164 } 00:11:40.164 ] 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.164 BaseBdev3 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.164 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.423 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.423 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.423 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.423 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.423 [ 00:11:40.423 { 00:11:40.423 "name": "BaseBdev3", 00:11:40.423 "aliases": [ 00:11:40.423 "68b425fa-02fa-4687-b414-fdfaf168ccc4" 00:11:40.423 ], 00:11:40.423 "product_name": "Malloc disk", 00:11:40.423 "block_size": 512, 00:11:40.423 "num_blocks": 65536, 00:11:40.423 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:40.423 "assigned_rate_limits": { 00:11:40.423 "rw_ios_per_sec": 0, 00:11:40.423 "rw_mbytes_per_sec": 0, 00:11:40.423 "r_mbytes_per_sec": 0, 00:11:40.423 "w_mbytes_per_sec": 0 00:11:40.423 }, 00:11:40.423 "claimed": false, 00:11:40.423 "zoned": false, 00:11:40.423 "supported_io_types": { 00:11:40.423 "read": true, 00:11:40.423 "write": true, 00:11:40.423 "unmap": true, 00:11:40.423 "flush": true, 00:11:40.423 "reset": true, 00:11:40.423 "nvme_admin": false, 
00:11:40.423 "nvme_io": false, 00:11:40.423 "nvme_io_md": false, 00:11:40.423 "write_zeroes": true, 00:11:40.423 "zcopy": true, 00:11:40.423 "get_zone_info": false, 00:11:40.423 "zone_management": false, 00:11:40.423 "zone_append": false, 00:11:40.423 "compare": false, 00:11:40.423 "compare_and_write": false, 00:11:40.423 "abort": true, 00:11:40.423 "seek_hole": false, 00:11:40.423 "seek_data": false, 00:11:40.423 "copy": true, 00:11:40.423 "nvme_iov_md": false 00:11:40.423 }, 00:11:40.423 "memory_domains": [ 00:11:40.423 { 00:11:40.423 "dma_device_id": "system", 00:11:40.423 "dma_device_type": 1 00:11:40.423 }, 00:11:40.423 { 00:11:40.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.423 "dma_device_type": 2 00:11:40.423 } 00:11:40.423 ], 00:11:40.423 "driver_specific": {} 00:11:40.424 } 00:11:40.424 ] 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.424 [2024-11-05 16:25:53.291210] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.424 [2024-11-05 16:25:53.291309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.424 [2024-11-05 16:25:53.291360] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.424 [2024-11-05 16:25:53.293451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.424 
16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.424 "name": "Existed_Raid", 00:11:40.424 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:40.424 "strip_size_kb": 0, 00:11:40.424 "state": "configuring", 00:11:40.424 "raid_level": "raid1", 00:11:40.424 "superblock": true, 00:11:40.424 "num_base_bdevs": 3, 00:11:40.424 "num_base_bdevs_discovered": 2, 00:11:40.424 "num_base_bdevs_operational": 3, 00:11:40.424 "base_bdevs_list": [ 00:11:40.424 { 00:11:40.424 "name": "BaseBdev1", 00:11:40.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.424 "is_configured": false, 00:11:40.424 "data_offset": 0, 00:11:40.424 "data_size": 0 00:11:40.424 }, 00:11:40.424 { 00:11:40.424 "name": "BaseBdev2", 00:11:40.424 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:40.424 "is_configured": true, 00:11:40.424 "data_offset": 2048, 00:11:40.424 "data_size": 63488 00:11:40.424 }, 00:11:40.424 { 00:11:40.424 "name": "BaseBdev3", 00:11:40.424 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:40.424 "is_configured": true, 00:11:40.424 "data_offset": 2048, 00:11:40.424 "data_size": 63488 00:11:40.424 } 00:11:40.424 ] 00:11:40.424 }' 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.424 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.683 [2024-11-05 16:25:53.762438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.683 16:25:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.683 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.941 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.941 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.942 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.942 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.942 "name": 
"Existed_Raid", 00:11:40.942 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:40.942 "strip_size_kb": 0, 00:11:40.942 "state": "configuring", 00:11:40.942 "raid_level": "raid1", 00:11:40.942 "superblock": true, 00:11:40.942 "num_base_bdevs": 3, 00:11:40.942 "num_base_bdevs_discovered": 1, 00:11:40.942 "num_base_bdevs_operational": 3, 00:11:40.942 "base_bdevs_list": [ 00:11:40.942 { 00:11:40.942 "name": "BaseBdev1", 00:11:40.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.942 "is_configured": false, 00:11:40.942 "data_offset": 0, 00:11:40.942 "data_size": 0 00:11:40.942 }, 00:11:40.942 { 00:11:40.942 "name": null, 00:11:40.942 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:40.942 "is_configured": false, 00:11:40.942 "data_offset": 0, 00:11:40.942 "data_size": 63488 00:11:40.942 }, 00:11:40.942 { 00:11:40.942 "name": "BaseBdev3", 00:11:40.942 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:40.942 "is_configured": true, 00:11:40.942 "data_offset": 2048, 00:11:40.942 "data_size": 63488 00:11:40.942 } 00:11:40.942 ] 00:11:40.942 }' 00:11:40.942 16:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.942 16:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:41.201 
16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 [2024-11-05 16:25:54.262403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.201 BaseBdev1 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:41.201 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 [ 00:11:41.201 { 00:11:41.201 "name": "BaseBdev1", 00:11:41.201 "aliases": [ 00:11:41.201 "c4997757-a25b-4e6f-ba13-a89d615472ba" 00:11:41.201 ], 00:11:41.201 "product_name": "Malloc disk", 00:11:41.201 "block_size": 512, 00:11:41.201 "num_blocks": 65536, 00:11:41.201 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:41.201 "assigned_rate_limits": { 00:11:41.201 "rw_ios_per_sec": 0, 00:11:41.201 "rw_mbytes_per_sec": 0, 00:11:41.201 "r_mbytes_per_sec": 0, 00:11:41.460 "w_mbytes_per_sec": 0 00:11:41.460 }, 00:11:41.460 "claimed": true, 00:11:41.460 "claim_type": "exclusive_write", 00:11:41.460 "zoned": false, 00:11:41.460 "supported_io_types": { 00:11:41.460 "read": true, 00:11:41.460 "write": true, 00:11:41.460 "unmap": true, 00:11:41.460 "flush": true, 00:11:41.460 "reset": true, 00:11:41.460 "nvme_admin": false, 00:11:41.460 "nvme_io": false, 00:11:41.460 "nvme_io_md": false, 00:11:41.460 "write_zeroes": true, 00:11:41.460 "zcopy": true, 00:11:41.460 "get_zone_info": false, 00:11:41.460 "zone_management": false, 00:11:41.460 "zone_append": false, 00:11:41.460 "compare": false, 00:11:41.460 "compare_and_write": false, 00:11:41.460 "abort": true, 00:11:41.460 "seek_hole": false, 00:11:41.460 "seek_data": false, 00:11:41.460 "copy": true, 00:11:41.460 "nvme_iov_md": false 00:11:41.460 }, 00:11:41.460 "memory_domains": [ 00:11:41.460 { 00:11:41.460 "dma_device_id": "system", 00:11:41.460 "dma_device_type": 1 00:11:41.460 }, 00:11:41.460 { 00:11:41.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.460 "dma_device_type": 2 00:11:41.460 } 00:11:41.460 ], 00:11:41.460 "driver_specific": {} 00:11:41.460 } 00:11:41.460 ] 00:11:41.460 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.460 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:41.460 
16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.460 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.461 "name": "Existed_Raid", 00:11:41.461 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:41.461 "strip_size_kb": 0, 
00:11:41.461 "state": "configuring", 00:11:41.461 "raid_level": "raid1", 00:11:41.461 "superblock": true, 00:11:41.461 "num_base_bdevs": 3, 00:11:41.461 "num_base_bdevs_discovered": 2, 00:11:41.461 "num_base_bdevs_operational": 3, 00:11:41.461 "base_bdevs_list": [ 00:11:41.461 { 00:11:41.461 "name": "BaseBdev1", 00:11:41.461 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:41.461 "is_configured": true, 00:11:41.461 "data_offset": 2048, 00:11:41.461 "data_size": 63488 00:11:41.461 }, 00:11:41.461 { 00:11:41.461 "name": null, 00:11:41.461 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:41.461 "is_configured": false, 00:11:41.461 "data_offset": 0, 00:11:41.461 "data_size": 63488 00:11:41.461 }, 00:11:41.461 { 00:11:41.461 "name": "BaseBdev3", 00:11:41.461 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:41.461 "is_configured": true, 00:11:41.461 "data_offset": 2048, 00:11:41.461 "data_size": 63488 00:11:41.461 } 00:11:41.461 ] 00:11:41.461 }' 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.461 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.719 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.720 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.720 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.720 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.720 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.979 [2024-11-05 16:25:54.821595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.979 "name": "Existed_Raid", 00:11:41.979 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:41.979 "strip_size_kb": 0, 00:11:41.979 "state": "configuring", 00:11:41.979 "raid_level": "raid1", 00:11:41.979 "superblock": true, 00:11:41.979 "num_base_bdevs": 3, 00:11:41.979 "num_base_bdevs_discovered": 1, 00:11:41.979 "num_base_bdevs_operational": 3, 00:11:41.979 "base_bdevs_list": [ 00:11:41.979 { 00:11:41.979 "name": "BaseBdev1", 00:11:41.979 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:41.979 "is_configured": true, 00:11:41.979 "data_offset": 2048, 00:11:41.979 "data_size": 63488 00:11:41.979 }, 00:11:41.979 { 00:11:41.979 "name": null, 00:11:41.979 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:41.979 "is_configured": false, 00:11:41.979 "data_offset": 0, 00:11:41.979 "data_size": 63488 00:11:41.979 }, 00:11:41.979 { 00:11:41.979 "name": null, 00:11:41.979 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:41.979 "is_configured": false, 00:11:41.979 "data_offset": 0, 00:11:41.979 "data_size": 63488 00:11:41.979 } 00:11:41.979 ] 00:11:41.979 }' 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.979 16:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.239 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.239 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.239 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:42.239 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.239 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.498 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.498 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.498 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.499 [2024-11-05 16:25:55.344734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.499 "name": "Existed_Raid", 00:11:42.499 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:42.499 "strip_size_kb": 0, 00:11:42.499 "state": "configuring", 00:11:42.499 "raid_level": "raid1", 00:11:42.499 "superblock": true, 00:11:42.499 "num_base_bdevs": 3, 00:11:42.499 "num_base_bdevs_discovered": 2, 00:11:42.499 "num_base_bdevs_operational": 3, 00:11:42.499 "base_bdevs_list": [ 00:11:42.499 { 00:11:42.499 "name": "BaseBdev1", 00:11:42.499 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:42.499 "is_configured": true, 00:11:42.499 "data_offset": 2048, 00:11:42.499 "data_size": 63488 00:11:42.499 }, 00:11:42.499 { 00:11:42.499 "name": null, 00:11:42.499 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:42.499 "is_configured": false, 00:11:42.499 "data_offset": 0, 00:11:42.499 "data_size": 63488 00:11:42.499 }, 00:11:42.499 { 00:11:42.499 "name": "BaseBdev3", 00:11:42.499 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:42.499 "is_configured": true, 00:11:42.499 "data_offset": 2048, 00:11:42.499 "data_size": 63488 00:11:42.499 } 00:11:42.499 ] 00:11:42.499 }' 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.499 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.758 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.017 [2024-11-05 16:25:55.848561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.017 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.018 16:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.018 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.018 "name": "Existed_Raid", 00:11:43.018 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:43.018 "strip_size_kb": 0, 00:11:43.018 "state": "configuring", 00:11:43.018 "raid_level": "raid1", 00:11:43.018 "superblock": true, 00:11:43.018 "num_base_bdevs": 3, 00:11:43.018 "num_base_bdevs_discovered": 1, 00:11:43.018 "num_base_bdevs_operational": 3, 00:11:43.018 "base_bdevs_list": [ 00:11:43.018 { 00:11:43.018 "name": null, 00:11:43.018 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:43.018 "is_configured": false, 00:11:43.018 "data_offset": 0, 00:11:43.018 "data_size": 63488 00:11:43.018 }, 00:11:43.018 { 00:11:43.018 "name": null, 00:11:43.018 "uuid": 
"a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:43.018 "is_configured": false, 00:11:43.018 "data_offset": 0, 00:11:43.018 "data_size": 63488 00:11:43.018 }, 00:11:43.018 { 00:11:43.018 "name": "BaseBdev3", 00:11:43.018 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:43.018 "is_configured": true, 00:11:43.018 "data_offset": 2048, 00:11:43.018 "data_size": 63488 00:11:43.018 } 00:11:43.018 ] 00:11:43.018 }' 00:11:43.018 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.018 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.588 [2024-11-05 16:25:56.453709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.588 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.588 "name": "Existed_Raid", 00:11:43.588 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:43.588 "strip_size_kb": 0, 00:11:43.588 "state": "configuring", 00:11:43.588 
"raid_level": "raid1", 00:11:43.588 "superblock": true, 00:11:43.588 "num_base_bdevs": 3, 00:11:43.588 "num_base_bdevs_discovered": 2, 00:11:43.588 "num_base_bdevs_operational": 3, 00:11:43.588 "base_bdevs_list": [ 00:11:43.588 { 00:11:43.588 "name": null, 00:11:43.588 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:43.588 "is_configured": false, 00:11:43.588 "data_offset": 0, 00:11:43.588 "data_size": 63488 00:11:43.588 }, 00:11:43.588 { 00:11:43.589 "name": "BaseBdev2", 00:11:43.589 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:43.589 "is_configured": true, 00:11:43.589 "data_offset": 2048, 00:11:43.589 "data_size": 63488 00:11:43.589 }, 00:11:43.589 { 00:11:43.589 "name": "BaseBdev3", 00:11:43.589 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:43.589 "is_configured": true, 00:11:43.589 "data_offset": 2048, 00:11:43.589 "data_size": 63488 00:11:43.589 } 00:11:43.589 ] 00:11:43.589 }' 00:11:43.589 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.589 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.855 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.855 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.855 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.855 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.855 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.113 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.114 16:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.114 16:25:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.114 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.114 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 16:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4997757-a25b-4e6f-ba13-a89d615472ba 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 [2024-11-05 16:25:57.054309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.114 [2024-11-05 16:25:57.054747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.114 [2024-11-05 16:25:57.054805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.114 [2024-11-05 16:25:57.055114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:44.114 NewBaseBdev 00:11:44.114 [2024-11-05 16:25:57.055336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.114 [2024-11-05 16:25:57.055362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.114 [2024-11-05 16:25:57.055514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.114 
16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 [ 00:11:44.114 { 00:11:44.114 "name": "NewBaseBdev", 00:11:44.114 "aliases": [ 00:11:44.114 "c4997757-a25b-4e6f-ba13-a89d615472ba" 00:11:44.114 ], 00:11:44.114 "product_name": "Malloc disk", 00:11:44.114 "block_size": 512, 00:11:44.114 "num_blocks": 65536, 00:11:44.114 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:44.114 "assigned_rate_limits": { 00:11:44.114 "rw_ios_per_sec": 0, 00:11:44.114 "rw_mbytes_per_sec": 0, 00:11:44.114 "r_mbytes_per_sec": 0, 00:11:44.114 "w_mbytes_per_sec": 0 00:11:44.114 }, 00:11:44.114 "claimed": true, 00:11:44.114 "claim_type": "exclusive_write", 00:11:44.114 
"zoned": false, 00:11:44.114 "supported_io_types": { 00:11:44.114 "read": true, 00:11:44.114 "write": true, 00:11:44.114 "unmap": true, 00:11:44.114 "flush": true, 00:11:44.114 "reset": true, 00:11:44.114 "nvme_admin": false, 00:11:44.114 "nvme_io": false, 00:11:44.114 "nvme_io_md": false, 00:11:44.114 "write_zeroes": true, 00:11:44.114 "zcopy": true, 00:11:44.114 "get_zone_info": false, 00:11:44.114 "zone_management": false, 00:11:44.114 "zone_append": false, 00:11:44.114 "compare": false, 00:11:44.114 "compare_and_write": false, 00:11:44.114 "abort": true, 00:11:44.114 "seek_hole": false, 00:11:44.114 "seek_data": false, 00:11:44.114 "copy": true, 00:11:44.114 "nvme_iov_md": false 00:11:44.114 }, 00:11:44.114 "memory_domains": [ 00:11:44.114 { 00:11:44.114 "dma_device_id": "system", 00:11:44.114 "dma_device_type": 1 00:11:44.114 }, 00:11:44.114 { 00:11:44.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.114 "dma_device_type": 2 00:11:44.114 } 00:11:44.114 ], 00:11:44.114 "driver_specific": {} 00:11:44.114 } 00:11:44.114 ] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.114 "name": "Existed_Raid", 00:11:44.114 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:44.114 "strip_size_kb": 0, 00:11:44.114 "state": "online", 00:11:44.114 "raid_level": "raid1", 00:11:44.114 "superblock": true, 00:11:44.114 "num_base_bdevs": 3, 00:11:44.114 "num_base_bdevs_discovered": 3, 00:11:44.114 "num_base_bdevs_operational": 3, 00:11:44.114 "base_bdevs_list": [ 00:11:44.114 { 00:11:44.114 "name": "NewBaseBdev", 00:11:44.114 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:44.114 "is_configured": true, 00:11:44.114 "data_offset": 2048, 00:11:44.114 "data_size": 63488 00:11:44.114 }, 00:11:44.114 { 00:11:44.114 "name": "BaseBdev2", 00:11:44.114 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:44.114 "is_configured": true, 00:11:44.114 "data_offset": 2048, 00:11:44.114 "data_size": 63488 00:11:44.114 }, 00:11:44.114 
{ 00:11:44.114 "name": "BaseBdev3", 00:11:44.114 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:44.114 "is_configured": true, 00:11:44.114 "data_offset": 2048, 00:11:44.114 "data_size": 63488 00:11:44.114 } 00:11:44.114 ] 00:11:44.114 }' 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.114 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 [2024-11-05 16:25:57.525983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.681 "name": "Existed_Raid", 00:11:44.681 
"aliases": [ 00:11:44.681 "5416db10-9fb2-4c35-b002-361d0640768e" 00:11:44.681 ], 00:11:44.681 "product_name": "Raid Volume", 00:11:44.681 "block_size": 512, 00:11:44.681 "num_blocks": 63488, 00:11:44.681 "uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:44.681 "assigned_rate_limits": { 00:11:44.681 "rw_ios_per_sec": 0, 00:11:44.681 "rw_mbytes_per_sec": 0, 00:11:44.681 "r_mbytes_per_sec": 0, 00:11:44.681 "w_mbytes_per_sec": 0 00:11:44.681 }, 00:11:44.681 "claimed": false, 00:11:44.681 "zoned": false, 00:11:44.681 "supported_io_types": { 00:11:44.681 "read": true, 00:11:44.681 "write": true, 00:11:44.681 "unmap": false, 00:11:44.681 "flush": false, 00:11:44.681 "reset": true, 00:11:44.681 "nvme_admin": false, 00:11:44.681 "nvme_io": false, 00:11:44.681 "nvme_io_md": false, 00:11:44.681 "write_zeroes": true, 00:11:44.681 "zcopy": false, 00:11:44.681 "get_zone_info": false, 00:11:44.681 "zone_management": false, 00:11:44.681 "zone_append": false, 00:11:44.681 "compare": false, 00:11:44.681 "compare_and_write": false, 00:11:44.681 "abort": false, 00:11:44.681 "seek_hole": false, 00:11:44.681 "seek_data": false, 00:11:44.681 "copy": false, 00:11:44.681 "nvme_iov_md": false 00:11:44.681 }, 00:11:44.681 "memory_domains": [ 00:11:44.681 { 00:11:44.681 "dma_device_id": "system", 00:11:44.681 "dma_device_type": 1 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.681 "dma_device_type": 2 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "dma_device_id": "system", 00:11:44.681 "dma_device_type": 1 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.681 "dma_device_type": 2 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "dma_device_id": "system", 00:11:44.681 "dma_device_type": 1 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.681 "dma_device_type": 2 00:11:44.681 } 00:11:44.681 ], 00:11:44.681 "driver_specific": { 00:11:44.681 "raid": { 00:11:44.681 
"uuid": "5416db10-9fb2-4c35-b002-361d0640768e", 00:11:44.681 "strip_size_kb": 0, 00:11:44.681 "state": "online", 00:11:44.681 "raid_level": "raid1", 00:11:44.681 "superblock": true, 00:11:44.681 "num_base_bdevs": 3, 00:11:44.681 "num_base_bdevs_discovered": 3, 00:11:44.681 "num_base_bdevs_operational": 3, 00:11:44.681 "base_bdevs_list": [ 00:11:44.681 { 00:11:44.681 "name": "NewBaseBdev", 00:11:44.681 "uuid": "c4997757-a25b-4e6f-ba13-a89d615472ba", 00:11:44.681 "is_configured": true, 00:11:44.681 "data_offset": 2048, 00:11:44.681 "data_size": 63488 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "name": "BaseBdev2", 00:11:44.681 "uuid": "a3acbd0a-05a4-48c0-96a0-bb7d47028afe", 00:11:44.681 "is_configured": true, 00:11:44.681 "data_offset": 2048, 00:11:44.681 "data_size": 63488 00:11:44.681 }, 00:11:44.681 { 00:11:44.681 "name": "BaseBdev3", 00:11:44.681 "uuid": "68b425fa-02fa-4687-b414-fdfaf168ccc4", 00:11:44.681 "is_configured": true, 00:11:44.681 "data_offset": 2048, 00:11:44.681 "data_size": 63488 00:11:44.681 } 00:11:44.681 ] 00:11:44.681 } 00:11:44.681 } 00:11:44.681 }' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.681 BaseBdev2 00:11:44.681 BaseBdev3' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.681 16:25:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.681 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.940 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.940 16:25:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.940 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.940 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.941 [2024-11-05 16:25:57.825144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.941 [2024-11-05 16:25:57.825264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.941 [2024-11-05 16:25:57.825416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.941 [2024-11-05 16:25:57.825809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.941 [2024-11-05 16:25:57.825875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68292 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68292 ']' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68292 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68292 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:44.941 killing process with pid 68292 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68292' 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68292 00:11:44.941 [2024-11-05 16:25:57.874597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.941 16:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68292 00:11:45.198 [2024-11-05 16:25:58.218908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.575 16:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.575 00:11:46.575 real 0m11.207s 00:11:46.575 user 0m17.745s 00:11:46.575 sys 0m1.962s 00:11:46.575 ************************************ 00:11:46.575 END TEST raid_state_function_test_sb 00:11:46.575 ************************************ 00:11:46.575 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.575 16:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.575 16:25:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:46.575 16:25:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:46.575 16:25:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.575 16:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.575 ************************************ 00:11:46.575 START TEST raid_superblock_test 00:11:46.575 ************************************ 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68924 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68924 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:46.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68924 ']' 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.575 16:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.575 [2024-11-05 16:25:59.573210] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:11:46.575 [2024-11-05 16:25:59.573415] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68924 ] 00:11:46.833 [2024-11-05 16:25:59.748880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.833 [2024-11-05 16:25:59.867360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.090 [2024-11-05 16:26:00.085623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.090 [2024-11-05 16:26:00.085781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.349 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.349 
16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 malloc1 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 [2024-11-05 16:26:00.491752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.607 [2024-11-05 16:26:00.491822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.607 [2024-11-05 16:26:00.491849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.607 [2024-11-05 16:26:00.491858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.607 [2024-11-05 16:26:00.494152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.607 [2024-11-05 16:26:00.494192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.607 pt1 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 malloc2 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 [2024-11-05 16:26:00.548350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.607 [2024-11-05 16:26:00.548479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.607 [2024-11-05 16:26:00.548549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.607 [2024-11-05 16:26:00.548609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.607 [2024-11-05 16:26:00.551040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.607 [2024-11-05 16:26:00.551078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.607 
pt2 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 malloc3 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.607 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.608 [2024-11-05 16:26:00.617659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.608 [2024-11-05 16:26:00.617768] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.608 [2024-11-05 16:26:00.617809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.608 [2024-11-05 16:26:00.617838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.608 [2024-11-05 16:26:00.620207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.608 [2024-11-05 16:26:00.620286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.608 pt3 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.608 [2024-11-05 16:26:00.629731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.608 [2024-11-05 16:26:00.631739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.608 [2024-11-05 16:26:00.631846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.608 [2024-11-05 16:26:00.632060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:47.608 [2024-11-05 16:26:00.632118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.608 [2024-11-05 16:26:00.632433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:47.608 
[2024-11-05 16:26:00.632687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:47.608 [2024-11-05 16:26:00.632740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:47.608 [2024-11-05 16:26:00.632955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.608 "name": "raid_bdev1", 00:11:47.608 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:47.608 "strip_size_kb": 0, 00:11:47.608 "state": "online", 00:11:47.608 "raid_level": "raid1", 00:11:47.608 "superblock": true, 00:11:47.608 "num_base_bdevs": 3, 00:11:47.608 "num_base_bdevs_discovered": 3, 00:11:47.608 "num_base_bdevs_operational": 3, 00:11:47.608 "base_bdevs_list": [ 00:11:47.608 { 00:11:47.608 "name": "pt1", 00:11:47.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.608 "is_configured": true, 00:11:47.608 "data_offset": 2048, 00:11:47.608 "data_size": 63488 00:11:47.608 }, 00:11:47.608 { 00:11:47.608 "name": "pt2", 00:11:47.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.608 "is_configured": true, 00:11:47.608 "data_offset": 2048, 00:11:47.608 "data_size": 63488 00:11:47.608 }, 00:11:47.608 { 00:11:47.608 "name": "pt3", 00:11:47.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.608 "is_configured": true, 00:11:47.608 "data_offset": 2048, 00:11:47.608 "data_size": 63488 00:11:47.608 } 00:11:47.608 ] 00:11:47.608 }' 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.608 16:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.175 16:26:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.175 [2024-11-05 16:26:01.109262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.175 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.175 "name": "raid_bdev1", 00:11:48.175 "aliases": [ 00:11:48.175 "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff" 00:11:48.175 ], 00:11:48.175 "product_name": "Raid Volume", 00:11:48.175 "block_size": 512, 00:11:48.175 "num_blocks": 63488, 00:11:48.175 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:48.175 "assigned_rate_limits": { 00:11:48.175 "rw_ios_per_sec": 0, 00:11:48.175 "rw_mbytes_per_sec": 0, 00:11:48.175 "r_mbytes_per_sec": 0, 00:11:48.175 "w_mbytes_per_sec": 0 00:11:48.175 }, 00:11:48.175 "claimed": false, 00:11:48.175 "zoned": false, 00:11:48.175 "supported_io_types": { 00:11:48.175 "read": true, 00:11:48.175 "write": true, 00:11:48.175 "unmap": false, 00:11:48.175 "flush": false, 00:11:48.175 "reset": true, 00:11:48.175 "nvme_admin": false, 00:11:48.175 "nvme_io": false, 00:11:48.175 "nvme_io_md": false, 00:11:48.175 "write_zeroes": true, 00:11:48.175 "zcopy": false, 00:11:48.175 "get_zone_info": false, 00:11:48.175 "zone_management": false, 00:11:48.175 "zone_append": false, 00:11:48.175 "compare": false, 00:11:48.175 
"compare_and_write": false, 00:11:48.175 "abort": false, 00:11:48.175 "seek_hole": false, 00:11:48.175 "seek_data": false, 00:11:48.175 "copy": false, 00:11:48.175 "nvme_iov_md": false 00:11:48.175 }, 00:11:48.175 "memory_domains": [ 00:11:48.175 { 00:11:48.175 "dma_device_id": "system", 00:11:48.175 "dma_device_type": 1 00:11:48.175 }, 00:11:48.175 { 00:11:48.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.175 "dma_device_type": 2 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "dma_device_id": "system", 00:11:48.176 "dma_device_type": 1 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.176 "dma_device_type": 2 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "dma_device_id": "system", 00:11:48.176 "dma_device_type": 1 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.176 "dma_device_type": 2 00:11:48.176 } 00:11:48.176 ], 00:11:48.176 "driver_specific": { 00:11:48.176 "raid": { 00:11:48.176 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:48.176 "strip_size_kb": 0, 00:11:48.176 "state": "online", 00:11:48.176 "raid_level": "raid1", 00:11:48.176 "superblock": true, 00:11:48.176 "num_base_bdevs": 3, 00:11:48.176 "num_base_bdevs_discovered": 3, 00:11:48.176 "num_base_bdevs_operational": 3, 00:11:48.176 "base_bdevs_list": [ 00:11:48.176 { 00:11:48.176 "name": "pt1", 00:11:48.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.176 "is_configured": true, 00:11:48.176 "data_offset": 2048, 00:11:48.176 "data_size": 63488 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "name": "pt2", 00:11:48.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.176 "is_configured": true, 00:11:48.176 "data_offset": 2048, 00:11:48.176 "data_size": 63488 00:11:48.176 }, 00:11:48.176 { 00:11:48.176 "name": "pt3", 00:11:48.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.176 "is_configured": true, 00:11:48.176 "data_offset": 2048, 00:11:48.176 "data_size": 63488 00:11:48.176 } 
00:11:48.176 ] 00:11:48.176 } 00:11:48.176 } 00:11:48.176 }' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.176 pt2 00:11:48.176 pt3' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.176 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.434 16:26:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.434 [2024-11-05 16:26:01.388879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff ']' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.434 [2024-11-05 16:26:01.432449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.434 [2024-11-05 16:26:01.432572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.434 [2024-11-05 16:26:01.432708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.434 [2024-11-05 16:26:01.432826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.434 [2024-11-05 16:26:01.432880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:48.434 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:48.435 
16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.435 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.693 [2024-11-05 16:26:01.584263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:48.693 [2024-11-05 16:26:01.586300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:48.693 [2024-11-05 16:26:01.586404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc3 is claimed 00:11:48.693 [2024-11-05 16:26:01.586461] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:48.693 [2024-11-05 16:26:01.586546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:48.693 [2024-11-05 16:26:01.586569] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:48.693 [2024-11-05 16:26:01.586588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.693 [2024-11-05 16:26:01.586599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:48.693 request: 00:11:48.693 { 00:11:48.693 "name": "raid_bdev1", 00:11:48.693 "raid_level": "raid1", 00:11:48.693 "base_bdevs": [ 00:11:48.693 "malloc1", 00:11:48.693 "malloc2", 00:11:48.693 "malloc3" 00:11:48.693 ], 00:11:48.693 "superblock": false, 00:11:48.693 "method": "bdev_raid_create", 00:11:48.693 "req_id": 1 00:11:48.693 } 00:11:48.693 Got JSON-RPC error response 00:11:48.693 response: 00:11:48.693 { 00:11:48.693 "code": -17, 00:11:48.693 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:48.693 } 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:48.693 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.694 
16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 [2024-11-05 16:26:01.652089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.694 [2024-11-05 16:26:01.652215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.694 [2024-11-05 16:26:01.652266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:48.694 [2024-11-05 16:26:01.652304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.694 [2024-11-05 16:26:01.654820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.694 [2024-11-05 16:26:01.654897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.694 [2024-11-05 16:26:01.655019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.694 [2024-11-05 16:26:01.655114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.694 pt1 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.694 "name": "raid_bdev1", 00:11:48.694 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:48.694 "strip_size_kb": 0, 00:11:48.694 "state": "configuring", 00:11:48.694 
"raid_level": "raid1", 00:11:48.694 "superblock": true, 00:11:48.694 "num_base_bdevs": 3, 00:11:48.694 "num_base_bdevs_discovered": 1, 00:11:48.694 "num_base_bdevs_operational": 3, 00:11:48.694 "base_bdevs_list": [ 00:11:48.694 { 00:11:48.694 "name": "pt1", 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.694 "is_configured": true, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 }, 00:11:48.694 { 00:11:48.694 "name": null, 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.694 "is_configured": false, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 }, 00:11:48.694 { 00:11:48.694 "name": null, 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.694 "is_configured": false, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 } 00:11:48.694 ] 00:11:48.694 }' 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.694 16:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.261 [2024-11-05 16:26:02.143290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.261 [2024-11-05 16:26:02.143375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.261 [2024-11-05 16:26:02.143403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:49.261 [2024-11-05 16:26:02.143414] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.261 [2024-11-05 16:26:02.144004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.261 [2024-11-05 16:26:02.144053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.261 [2024-11-05 16:26:02.144177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.261 [2024-11-05 16:26:02.144207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.261 pt2 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.261 [2024-11-05 16:26:02.155264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.261 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.261 "name": "raid_bdev1", 00:11:49.261 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:49.261 "strip_size_kb": 0, 00:11:49.261 "state": "configuring", 00:11:49.261 "raid_level": "raid1", 00:11:49.261 "superblock": true, 00:11:49.261 "num_base_bdevs": 3, 00:11:49.261 "num_base_bdevs_discovered": 1, 00:11:49.261 "num_base_bdevs_operational": 3, 00:11:49.261 "base_bdevs_list": [ 00:11:49.261 { 00:11:49.261 "name": "pt1", 00:11:49.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.261 "is_configured": true, 00:11:49.261 "data_offset": 2048, 00:11:49.261 "data_size": 63488 00:11:49.261 }, 00:11:49.261 { 00:11:49.261 "name": null, 00:11:49.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.261 "is_configured": false, 00:11:49.261 "data_offset": 0, 00:11:49.262 "data_size": 63488 00:11:49.262 }, 00:11:49.262 { 00:11:49.262 "name": null, 00:11:49.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.262 "is_configured": false, 00:11:49.262 "data_offset": 2048, 00:11:49.262 
"data_size": 63488 00:11:49.262 } 00:11:49.262 ] 00:11:49.262 }' 00:11:49.262 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.262 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.828 [2024-11-05 16:26:02.626447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.828 [2024-11-05 16:26:02.626628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.828 [2024-11-05 16:26:02.626682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:49.828 [2024-11-05 16:26:02.626726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.828 [2024-11-05 16:26:02.627276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.828 [2024-11-05 16:26:02.627350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.828 [2024-11-05 16:26:02.627480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.828 [2024-11-05 16:26:02.627572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.828 pt2 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.828 [2024-11-05 16:26:02.638411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.828 [2024-11-05 16:26:02.638516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.828 [2024-11-05 16:26:02.638572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.828 [2024-11-05 16:26:02.638634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.828 [2024-11-05 16:26:02.639114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.828 [2024-11-05 16:26:02.639216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.828 [2024-11-05 16:26:02.639392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:49.828 [2024-11-05 16:26:02.639481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.828 [2024-11-05 16:26:02.639710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.828 [2024-11-05 16:26:02.639768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.828 [2024-11-05 16:26:02.640118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:49.828 [2024-11-05 16:26:02.640335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:49.828 [2024-11-05 16:26:02.640400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:49.828 [2024-11-05 16:26:02.640638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.828 pt3 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.828 "name": "raid_bdev1", 00:11:49.828 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:49.828 "strip_size_kb": 0, 00:11:49.828 "state": "online", 00:11:49.828 "raid_level": "raid1", 00:11:49.828 "superblock": true, 00:11:49.828 "num_base_bdevs": 3, 00:11:49.828 "num_base_bdevs_discovered": 3, 00:11:49.828 "num_base_bdevs_operational": 3, 00:11:49.828 "base_bdevs_list": [ 00:11:49.828 { 00:11:49.828 "name": "pt1", 00:11:49.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.828 "is_configured": true, 00:11:49.828 "data_offset": 2048, 00:11:49.828 "data_size": 63488 00:11:49.828 }, 00:11:49.828 { 00:11:49.828 "name": "pt2", 00:11:49.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.828 "is_configured": true, 00:11:49.828 "data_offset": 2048, 00:11:49.828 "data_size": 63488 00:11:49.828 }, 00:11:49.828 { 00:11:49.828 "name": "pt3", 00:11:49.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.828 "is_configured": true, 00:11:49.828 "data_offset": 2048, 00:11:49.828 "data_size": 63488 00:11:49.828 } 00:11:49.828 ] 00:11:49.828 }' 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.828 16:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.087 16:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.087 [2024-11-05 16:26:03.102029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.087 "name": "raid_bdev1", 00:11:50.087 "aliases": [ 00:11:50.087 "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff" 00:11:50.087 ], 00:11:50.087 "product_name": "Raid Volume", 00:11:50.087 "block_size": 512, 00:11:50.087 "num_blocks": 63488, 00:11:50.087 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:50.087 "assigned_rate_limits": { 00:11:50.087 "rw_ios_per_sec": 0, 00:11:50.087 "rw_mbytes_per_sec": 0, 00:11:50.087 "r_mbytes_per_sec": 0, 00:11:50.087 "w_mbytes_per_sec": 0 00:11:50.087 }, 00:11:50.087 "claimed": false, 00:11:50.087 "zoned": false, 00:11:50.087 "supported_io_types": { 00:11:50.087 "read": true, 00:11:50.087 "write": true, 00:11:50.087 "unmap": false, 00:11:50.087 "flush": false, 00:11:50.087 "reset": true, 00:11:50.087 "nvme_admin": false, 00:11:50.087 "nvme_io": false, 00:11:50.087 "nvme_io_md": false, 00:11:50.087 "write_zeroes": true, 00:11:50.087 "zcopy": false, 00:11:50.087 "get_zone_info": false, 00:11:50.087 
"zone_management": false, 00:11:50.087 "zone_append": false, 00:11:50.087 "compare": false, 00:11:50.087 "compare_and_write": false, 00:11:50.087 "abort": false, 00:11:50.087 "seek_hole": false, 00:11:50.087 "seek_data": false, 00:11:50.087 "copy": false, 00:11:50.087 "nvme_iov_md": false 00:11:50.087 }, 00:11:50.087 "memory_domains": [ 00:11:50.087 { 00:11:50.087 "dma_device_id": "system", 00:11:50.087 "dma_device_type": 1 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.087 "dma_device_type": 2 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "dma_device_id": "system", 00:11:50.087 "dma_device_type": 1 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.087 "dma_device_type": 2 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "dma_device_id": "system", 00:11:50.087 "dma_device_type": 1 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.087 "dma_device_type": 2 00:11:50.087 } 00:11:50.087 ], 00:11:50.087 "driver_specific": { 00:11:50.087 "raid": { 00:11:50.087 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:50.087 "strip_size_kb": 0, 00:11:50.087 "state": "online", 00:11:50.087 "raid_level": "raid1", 00:11:50.087 "superblock": true, 00:11:50.087 "num_base_bdevs": 3, 00:11:50.087 "num_base_bdevs_discovered": 3, 00:11:50.087 "num_base_bdevs_operational": 3, 00:11:50.087 "base_bdevs_list": [ 00:11:50.087 { 00:11:50.087 "name": "pt1", 00:11:50.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.087 "is_configured": true, 00:11:50.087 "data_offset": 2048, 00:11:50.087 "data_size": 63488 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "name": "pt2", 00:11:50.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.087 "is_configured": true, 00:11:50.087 "data_offset": 2048, 00:11:50.087 "data_size": 63488 00:11:50.087 }, 00:11:50.087 { 00:11:50.087 "name": "pt3", 00:11:50.087 "uuid": "00000000-0000-0000-0000-000000000003", 
00:11:50.087 "is_configured": true, 00:11:50.087 "data_offset": 2048, 00:11:50.087 "data_size": 63488 00:11:50.087 } 00:11:50.087 ] 00:11:50.087 } 00:11:50.087 } 00:11:50.087 }' 00:11:50.087 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.345 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.345 pt2 00:11:50.345 pt3' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.346 [2024-11-05 16:26:03.377511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff '!=' 2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff ']' 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.346 [2024-11-05 16:26:03.425183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.346 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.604 "name": "raid_bdev1", 00:11:50.604 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:50.604 "strip_size_kb": 0, 00:11:50.604 "state": "online", 00:11:50.604 "raid_level": "raid1", 00:11:50.604 "superblock": true, 00:11:50.604 "num_base_bdevs": 3, 00:11:50.604 "num_base_bdevs_discovered": 2, 00:11:50.604 "num_base_bdevs_operational": 2, 00:11:50.604 "base_bdevs_list": [ 00:11:50.604 { 00:11:50.604 "name": null, 00:11:50.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.604 "is_configured": false, 00:11:50.604 "data_offset": 0, 00:11:50.604 "data_size": 63488 00:11:50.604 }, 00:11:50.604 { 00:11:50.604 "name": "pt2", 00:11:50.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.604 "is_configured": true, 00:11:50.604 "data_offset": 2048, 00:11:50.604 "data_size": 63488 00:11:50.604 }, 00:11:50.604 { 00:11:50.604 "name": "pt3", 00:11:50.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.604 "is_configured": true, 00:11:50.604 "data_offset": 2048, 00:11:50.604 "data_size": 63488 00:11:50.604 } 00:11:50.604 ] 00:11:50.604 }' 00:11:50.604 16:26:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.604 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 [2024-11-05 16:26:03.876628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.863 [2024-11-05 16:26:03.876717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.863 [2024-11-05 16:26:03.876832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.863 [2024-11-05 16:26:03.876936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.863 [2024-11-05 16:26:03.876997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:50.863 
16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.863 [2024-11-05 16:26:03.944445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.863 [2024-11-05 16:26:03.944531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.863 [2024-11-05 16:26:03.944567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:50.863 [2024-11-05 16:26:03.944579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.863 [2024-11-05 16:26:03.947019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.863 [2024-11-05 16:26:03.947063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.863 [2024-11-05 16:26:03.947149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:50.863 [2024-11-05 16:26:03.947210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.863 pt2 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.863 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.121 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.121 "name": "raid_bdev1", 00:11:51.121 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:51.121 "strip_size_kb": 0, 00:11:51.121 "state": "configuring", 00:11:51.121 "raid_level": "raid1", 00:11:51.121 "superblock": true, 00:11:51.121 "num_base_bdevs": 3, 00:11:51.121 "num_base_bdevs_discovered": 1, 00:11:51.121 "num_base_bdevs_operational": 2, 00:11:51.122 "base_bdevs_list": [ 00:11:51.122 { 00:11:51.122 "name": null, 00:11:51.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.122 "is_configured": false, 00:11:51.122 "data_offset": 2048, 00:11:51.122 "data_size": 63488 00:11:51.122 }, 00:11:51.122 { 00:11:51.122 "name": "pt2", 00:11:51.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.122 "is_configured": true, 00:11:51.122 "data_offset": 2048, 00:11:51.122 "data_size": 63488 00:11:51.122 }, 00:11:51.122 { 00:11:51.122 "name": null, 00:11:51.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.122 "is_configured": false, 00:11:51.122 "data_offset": 2048, 00:11:51.122 "data_size": 63488 00:11:51.122 } 00:11:51.122 ] 00:11:51.122 }' 
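The log above shows `verify_raid_bdev_state` caching the `bdev_raid_get_bdevs` JSON into `raid_bdev_info` and then checking individual fields. A minimal sketch of that check, using the same `jq` idiom as `bdev_raid.sh@113`; the JSON here is abridged from the dump in the log (in the real test it comes from `rpc_cmd bdev_raid_get_bdevs all`, which is not reproduced here):

```shell
# Abridged copy of the raid_bdev_info blob printed in the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2
}'

# Pull individual fields out of the cached JSON, as the test does with
# jq -r '.[] | select(.name == "raid_bdev1")' followed by field lookups.
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

# The test then compares against the expected values passed in as arguments
# (here: "configuring" and 1, matching the dump above).
[ "$state" = "configuring" ] && echo "state ok"
[ "$discovered" -eq 1 ] && echo "discovered ok"
```

This is why the log interleaves a `jq -r` line with each `rpc_cmd bdev_raid_get_bdevs` call: the RPC output is filtered once, cached, and then asserted field by field.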
00:11:51.122 16:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.122 16:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.380 [2024-11-05 16:26:04.407697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:51.380 [2024-11-05 16:26:04.407855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.380 [2024-11-05 16:26:04.407902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:51.380 [2024-11-05 16:26:04.407965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.380 [2024-11-05 16:26:04.408623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.380 [2024-11-05 16:26:04.408695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:51.380 [2024-11-05 16:26:04.408846] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:51.380 [2024-11-05 16:26:04.408911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.380 [2024-11-05 16:26:04.409089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.380 [2024-11-05 16:26:04.409136] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.380 [2024-11-05 16:26:04.409473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:51.380 [2024-11-05 16:26:04.409714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.380 [2024-11-05 16:26:04.409763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:51.380 [2024-11-05 16:26:04.409987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.380 pt3 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.380 16:26:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.380 "name": "raid_bdev1", 00:11:51.380 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:51.380 "strip_size_kb": 0, 00:11:51.380 "state": "online", 00:11:51.380 "raid_level": "raid1", 00:11:51.380 "superblock": true, 00:11:51.380 "num_base_bdevs": 3, 00:11:51.380 "num_base_bdevs_discovered": 2, 00:11:51.380 "num_base_bdevs_operational": 2, 00:11:51.380 "base_bdevs_list": [ 00:11:51.380 { 00:11:51.380 "name": null, 00:11:51.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.380 "is_configured": false, 00:11:51.380 "data_offset": 2048, 00:11:51.380 "data_size": 63488 00:11:51.380 }, 00:11:51.380 { 00:11:51.380 "name": "pt2", 00:11:51.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.380 "is_configured": true, 00:11:51.380 "data_offset": 2048, 00:11:51.380 "data_size": 63488 00:11:51.380 }, 00:11:51.380 { 00:11:51.380 "name": "pt3", 00:11:51.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.380 "is_configured": true, 00:11:51.380 "data_offset": 2048, 00:11:51.380 "data_size": 63488 00:11:51.380 } 00:11:51.380 ] 00:11:51.380 }' 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.380 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 [2024-11-05 16:26:04.834943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.946 [2024-11-05 16:26:04.834982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.946 [2024-11-05 16:26:04.835077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.946 [2024-11-05 16:26:04.835149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.946 [2024-11-05 16:26:04.835160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 [2024-11-05 16:26:04.910837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:51.946 [2024-11-05 16:26:04.910908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.946 [2024-11-05 16:26:04.910932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:51.946 [2024-11-05 16:26:04.910942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.946 [2024-11-05 16:26:04.913405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.946 [2024-11-05 16:26:04.913448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:51.946 [2024-11-05 16:26:04.913561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:51.946 [2024-11-05 16:26:04.913612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:51.946 [2024-11-05 16:26:04.913750] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:51.946 [2024-11-05 16:26:04.913761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.946 [2024-11-05 16:26:04.913780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
configuring 00:11:51.946 [2024-11-05 16:26:04.913854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.946 pt1 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.946 "name": "raid_bdev1", 00:11:51.946 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:51.946 "strip_size_kb": 0, 00:11:51.946 "state": "configuring", 00:11:51.946 "raid_level": "raid1", 00:11:51.946 "superblock": true, 00:11:51.946 "num_base_bdevs": 3, 00:11:51.946 "num_base_bdevs_discovered": 1, 00:11:51.946 "num_base_bdevs_operational": 2, 00:11:51.946 "base_bdevs_list": [ 00:11:51.946 { 00:11:51.946 "name": null, 00:11:51.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.946 "is_configured": false, 00:11:51.946 "data_offset": 2048, 00:11:51.946 "data_size": 63488 00:11:51.946 }, 00:11:51.946 { 00:11:51.946 "name": "pt2", 00:11:51.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.946 "is_configured": true, 00:11:51.946 "data_offset": 2048, 00:11:51.946 "data_size": 63488 00:11:51.946 }, 00:11:51.946 { 00:11:51.946 "name": null, 00:11:51.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.946 "is_configured": false, 00:11:51.946 "data_offset": 2048, 00:11:51.946 "data_size": 63488 00:11:51.946 } 00:11:51.946 ] 00:11:51.946 }' 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.946 16:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.512 16:26:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.512 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.512 [2024-11-05 16:26:05.429995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:52.512 [2024-11-05 16:26:05.430127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.512 [2024-11-05 16:26:05.430171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:52.513 [2024-11-05 16:26:05.430203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.513 [2024-11-05 16:26:05.430738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.513 [2024-11-05 16:26:05.430801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:52.513 [2024-11-05 16:26:05.430922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:52.513 [2024-11-05 16:26:05.431005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:52.513 [2024-11-05 16:26:05.431194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:52.513 [2024-11-05 16:26:05.431236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.513 [2024-11-05 16:26:05.431541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:52.513 [2024-11-05 16:26:05.431763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:52.513 [2024-11-05 16:26:05.431813] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:52.513 [2024-11-05 16:26:05.432028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.513 pt3 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.513 "name": "raid_bdev1", 00:11:52.513 "uuid": "2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff", 00:11:52.513 "strip_size_kb": 0, 00:11:52.513 "state": "online", 00:11:52.513 "raid_level": "raid1", 00:11:52.513 "superblock": true, 00:11:52.513 "num_base_bdevs": 3, 00:11:52.513 "num_base_bdevs_discovered": 2, 00:11:52.513 "num_base_bdevs_operational": 2, 00:11:52.513 "base_bdevs_list": [ 00:11:52.513 { 00:11:52.513 "name": null, 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.513 "is_configured": false, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 }, 00:11:52.513 { 00:11:52.513 "name": "pt2", 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.513 "is_configured": true, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 }, 00:11:52.513 { 00:11:52.513 "name": "pt3", 00:11:52.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.513 "is_configured": true, 00:11:52.513 "data_offset": 2048, 00:11:52.513 "data_size": 63488 00:11:52.513 } 00:11:52.513 ] 00:11:52.513 }' 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.513 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.096 [2024-11-05 16:26:05.937413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff '!=' 2679a4b2-5b2a-49d4-8a1f-bccec4c2e7ff ']' 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68924 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68924 ']' 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68924 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:53.096 16:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68924 00:11:53.096 killing process with pid 68924 00:11:53.096 16:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:53.096 16:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:53.096 16:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68924' 00:11:53.096 16:26:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68924 00:11:53.096 [2024-11-05 16:26:06.003941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.096 [2024-11-05 16:26:06.004053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.096 16:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68924 00:11:53.096 [2024-11-05 16:26:06.004121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.096 [2024-11-05 16:26:06.004134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:53.380 [2024-11-05 16:26:06.336902] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.754 16:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:54.755 00:11:54.755 real 0m8.060s 00:11:54.755 user 0m12.582s 00:11:54.755 sys 0m1.379s 00:11:54.755 16:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.755 16:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 ************************************ 00:11:54.755 END TEST raid_superblock_test 00:11:54.755 ************************************ 00:11:54.755 16:26:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:54.755 16:26:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:54.755 16:26:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.755 16:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 ************************************ 00:11:54.755 START TEST raid_read_error_test 00:11:54.755 ************************************ 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:11:54.755 16:26:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:54.755 16:26:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xoYRodfV9A 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69371 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69371 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69371 ']' 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.755 16:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 [2024-11-05 16:26:07.721554] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:11:54.755 [2024-11-05 16:26:07.721777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69371 ] 00:11:55.014 [2024-11-05 16:26:07.899728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.014 [2024-11-05 16:26:08.022881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.272 [2024-11-05 16:26:08.239607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.272 [2024-11-05 16:26:08.239682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.530 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 BaseBdev1_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 true 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 [2024-11-05 16:26:08.658687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.789 [2024-11-05 16:26:08.658751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.789 [2024-11-05 16:26:08.658775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.789 [2024-11-05 16:26:08.658787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.789 [2024-11-05 16:26:08.661175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.789 [2024-11-05 16:26:08.661223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.789 BaseBdev1 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 BaseBdev2_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 true 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 [2024-11-05 16:26:08.723007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.789 [2024-11-05 16:26:08.723072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.789 [2024-11-05 16:26:08.723091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.789 [2024-11-05 16:26:08.723102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.789 [2024-11-05 16:26:08.725318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.789 [2024-11-05 16:26:08.725451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.789 BaseBdev2 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 BaseBdev3_malloc 00:11:55.789 16:26:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 true 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 [2024-11-05 16:26:08.795377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.789 [2024-11-05 16:26:08.795527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.789 [2024-11-05 16:26:08.795553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.789 [2024-11-05 16:26:08.795566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.789 [2024-11-05 16:26:08.797927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.789 [2024-11-05 16:26:08.797968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.789 BaseBdev3 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 [2024-11-05 16:26:08.803439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.789 [2024-11-05 16:26:08.805596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.789 [2024-11-05 16:26:08.805744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.789 [2024-11-05 16:26:08.805986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.789 [2024-11-05 16:26:08.806001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.789 [2024-11-05 16:26:08.806284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:55.789 [2024-11-05 16:26:08.806487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.789 [2024-11-05 16:26:08.806501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:55.789 [2024-11-05 16:26:08.806705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.789 16:26:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.789 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.789 "name": "raid_bdev1", 00:11:55.789 "uuid": "a197e1da-435d-4ab7-bba5-c59c7c244295", 00:11:55.789 "strip_size_kb": 0, 00:11:55.789 "state": "online", 00:11:55.789 "raid_level": "raid1", 00:11:55.789 "superblock": true, 00:11:55.789 "num_base_bdevs": 3, 00:11:55.789 "num_base_bdevs_discovered": 3, 00:11:55.789 "num_base_bdevs_operational": 3, 00:11:55.789 "base_bdevs_list": [ 00:11:55.789 { 00:11:55.789 "name": "BaseBdev1", 00:11:55.789 "uuid": "8ec22f78-93c5-5925-bfbd-399e301431b3", 00:11:55.790 "is_configured": true, 00:11:55.790 "data_offset": 2048, 00:11:55.790 "data_size": 63488 00:11:55.790 }, 00:11:55.790 { 00:11:55.790 "name": "BaseBdev2", 00:11:55.790 "uuid": "d187a2f6-a429-5404-ab31-cefdd8d846d4", 00:11:55.790 "is_configured": true, 00:11:55.790 "data_offset": 2048, 00:11:55.790 "data_size": 63488 
00:11:55.790 }, 00:11:55.790 { 00:11:55.790 "name": "BaseBdev3", 00:11:55.790 "uuid": "062f8631-0d9a-50d0-a4e6-1504861523c4", 00:11:55.790 "is_configured": true, 00:11:55.790 "data_offset": 2048, 00:11:55.790 "data_size": 63488 00:11:55.790 } 00:11:55.790 ] 00:11:55.790 }' 00:11:55.790 16:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.790 16:26:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.355 16:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.355 16:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.355 [2024-11-05 16:26:09.380195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.291 
16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.291 "name": "raid_bdev1", 00:11:57.291 "uuid": "a197e1da-435d-4ab7-bba5-c59c7c244295", 00:11:57.291 "strip_size_kb": 0, 00:11:57.291 "state": "online", 00:11:57.291 "raid_level": "raid1", 00:11:57.291 "superblock": true, 00:11:57.291 "num_base_bdevs": 3, 00:11:57.291 "num_base_bdevs_discovered": 3, 00:11:57.291 "num_base_bdevs_operational": 3, 00:11:57.291 "base_bdevs_list": [ 00:11:57.291 { 00:11:57.291 "name": "BaseBdev1", 00:11:57.291 "uuid": "8ec22f78-93c5-5925-bfbd-399e301431b3", 
00:11:57.291 "is_configured": true, 00:11:57.291 "data_offset": 2048, 00:11:57.291 "data_size": 63488 00:11:57.291 }, 00:11:57.291 { 00:11:57.291 "name": "BaseBdev2", 00:11:57.291 "uuid": "d187a2f6-a429-5404-ab31-cefdd8d846d4", 00:11:57.291 "is_configured": true, 00:11:57.291 "data_offset": 2048, 00:11:57.291 "data_size": 63488 00:11:57.291 }, 00:11:57.291 { 00:11:57.291 "name": "BaseBdev3", 00:11:57.291 "uuid": "062f8631-0d9a-50d0-a4e6-1504861523c4", 00:11:57.291 "is_configured": true, 00:11:57.291 "data_offset": 2048, 00:11:57.291 "data_size": 63488 00:11:57.291 } 00:11:57.291 ] 00:11:57.291 }' 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.291 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.858 [2024-11-05 16:26:10.744467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.858 [2024-11-05 16:26:10.744536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.858 [2024-11-05 16:26:10.747634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.858 [2024-11-05 16:26:10.747686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.858 [2024-11-05 16:26:10.747799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.858 [2024-11-05 16:26:10.747811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:57.858 { 00:11:57.858 "results": [ 00:11:57.858 { 00:11:57.858 "job": "raid_bdev1", 
00:11:57.858 "core_mask": "0x1", 00:11:57.858 "workload": "randrw", 00:11:57.858 "percentage": 50, 00:11:57.858 "status": "finished", 00:11:57.858 "queue_depth": 1, 00:11:57.858 "io_size": 131072, 00:11:57.858 "runtime": 1.364702, 00:11:57.858 "iops": 12192.405374946326, 00:11:57.858 "mibps": 1524.0506718682907, 00:11:57.858 "io_failed": 0, 00:11:57.858 "io_timeout": 0, 00:11:57.858 "avg_latency_us": 79.05939006348791, 00:11:57.858 "min_latency_us": 25.3764192139738, 00:11:57.858 "max_latency_us": 1974.665502183406 00:11:57.858 } 00:11:57.858 ], 00:11:57.858 "core_count": 1 00:11:57.858 } 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69371 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69371 ']' 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69371 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69371 00:11:57.858 killing process with pid 69371 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69371' 00:11:57.858 16:26:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69371 00:11:57.858 [2024-11-05 16:26:10.783058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.858 16:26:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69371 00:11:58.117 [2024-11-05 16:26:11.051745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xoYRodfV9A 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:59.492 ************************************ 00:11:59.492 END TEST raid_read_error_test 00:11:59.492 ************************************ 00:11:59.492 00:11:59.492 real 0m4.753s 00:11:59.492 user 0m5.665s 00:11:59.492 sys 0m0.564s 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.492 16:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.492 16:26:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:59.492 16:26:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:59.492 16:26:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.492 16:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.492 ************************************ 00:11:59.492 START TEST raid_write_error_test 00:11:59.492 ************************************ 00:11:59.492 16:26:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:11:59.492 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dicPg2oMk2 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69511 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69511 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69511 ']' 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.493 16:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.493 [2024-11-05 16:26:12.549190] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:11:59.493 [2024-11-05 16:26:12.549314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69511 ] 00:11:59.753 [2024-11-05 16:26:12.727489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.013 [2024-11-05 16:26:12.845236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.013 [2024-11-05 16:26:13.064766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.013 [2024-11-05 16:26:13.064828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 BaseBdev1_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 true 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 [2024-11-05 16:26:13.520383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.582 [2024-11-05 16:26:13.520442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.582 [2024-11-05 16:26:13.520463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.582 [2024-11-05 16:26:13.520474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.582 [2024-11-05 16:26:13.522885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.582 [2024-11-05 16:26:13.522926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.582 BaseBdev1 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.582 BaseBdev2_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 true 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 [2024-11-05 16:26:13.590396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.582 [2024-11-05 16:26:13.590458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.582 [2024-11-05 16:26:13.590477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.582 [2024-11-05 16:26:13.590489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.582 [2024-11-05 16:26:13.592842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.582 [2024-11-05 16:26:13.592884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.582 BaseBdev2 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.582 16:26:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 BaseBdev3_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 true 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.582 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.582 [2024-11-05 16:26:13.670049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.582 [2024-11-05 16:26:13.670114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.582 [2024-11-05 16:26:13.670137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.582 [2024-11-05 16:26:13.670149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.842 [2024-11-05 16:26:13.672642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.842 [2024-11-05 16:26:13.672687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:00.842 BaseBdev3 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.842 [2024-11-05 16:26:13.682084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.842 [2024-11-05 16:26:13.683972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.842 [2024-11-05 16:26:13.684048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.842 [2024-11-05 16:26:13.684266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.842 [2024-11-05 16:26:13.684279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.842 [2024-11-05 16:26:13.684577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:00.842 [2024-11-05 16:26:13.684777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.842 [2024-11-05 16:26:13.684801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:00.842 [2024-11-05 16:26:13.685016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.842 "name": "raid_bdev1", 00:12:00.842 "uuid": "45f033e0-d331-4bfd-bffa-f62ba42051e0", 00:12:00.842 "strip_size_kb": 0, 00:12:00.842 "state": "online", 00:12:00.842 "raid_level": "raid1", 00:12:00.842 "superblock": true, 00:12:00.842 "num_base_bdevs": 3, 00:12:00.842 "num_base_bdevs_discovered": 3, 00:12:00.842 "num_base_bdevs_operational": 3, 00:12:00.842 "base_bdevs_list": [ 00:12:00.842 { 00:12:00.842 "name": "BaseBdev1", 00:12:00.842 
"uuid": "6c0afa26-33f7-5e4e-9fad-12669275a883", 00:12:00.842 "is_configured": true, 00:12:00.842 "data_offset": 2048, 00:12:00.842 "data_size": 63488 00:12:00.842 }, 00:12:00.842 { 00:12:00.842 "name": "BaseBdev2", 00:12:00.842 "uuid": "430b9717-c90c-543c-8678-9df5ceeb84ce", 00:12:00.842 "is_configured": true, 00:12:00.842 "data_offset": 2048, 00:12:00.842 "data_size": 63488 00:12:00.842 }, 00:12:00.842 { 00:12:00.842 "name": "BaseBdev3", 00:12:00.842 "uuid": "87b2f7f2-2d1a-5f7d-9144-31e6d39cd37e", 00:12:00.842 "is_configured": true, 00:12:00.842 "data_offset": 2048, 00:12:00.842 "data_size": 63488 00:12:00.842 } 00:12:00.842 ] 00:12:00.842 }' 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.842 16:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 16:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.244 16:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.244 [2024-11-05 16:26:14.226645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 [2024-11-05 16:26:15.130709] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:02.183 [2024-11-05 16:26:15.130768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.183 [2024-11-05 16:26:15.130987] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
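The `verify_raid_bdev_state` helper above runs `jq` over the `bdev_raid_get_bdevs` output and compares each field against the expected values. A hedged Python sketch of the same assertion logic, fed a trimmed-down copy of the raid_bdev1 info printed in this log (the function name mirrors the shell helper; this is an illustration, not SPDK test code):

```python
import json

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Check one bdev_raid_get_bdevs entry the way the shell helper does."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered base bdevs are the configured entries in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered

# Reduced version of the raid_bdev1 info dumped above (3 bdevs, all online).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 3)
```

After the injected write failure removes BaseBdev1, the same check is re-run with `num_operational=2`, matching the degraded JSON that follows (first slot null and not configured, two bdevs still operational).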
00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.183 "name": "raid_bdev1", 00:12:02.183 "uuid": "45f033e0-d331-4bfd-bffa-f62ba42051e0", 00:12:02.183 "strip_size_kb": 0, 00:12:02.183 "state": "online", 00:12:02.183 "raid_level": "raid1", 00:12:02.183 "superblock": true, 00:12:02.183 "num_base_bdevs": 3, 00:12:02.183 "num_base_bdevs_discovered": 2, 00:12:02.183 "num_base_bdevs_operational": 2, 00:12:02.183 "base_bdevs_list": [ 00:12:02.183 { 00:12:02.183 "name": null, 00:12:02.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.183 "is_configured": false, 00:12:02.183 "data_offset": 0, 00:12:02.183 "data_size": 63488 00:12:02.183 }, 00:12:02.183 { 00:12:02.183 "name": "BaseBdev2", 00:12:02.183 "uuid": "430b9717-c90c-543c-8678-9df5ceeb84ce", 00:12:02.183 "is_configured": true, 00:12:02.183 "data_offset": 2048, 00:12:02.183 "data_size": 63488 00:12:02.183 }, 00:12:02.183 { 00:12:02.183 "name": "BaseBdev3", 00:12:02.183 "uuid": "87b2f7f2-2d1a-5f7d-9144-31e6d39cd37e", 00:12:02.183 "is_configured": true, 00:12:02.183 "data_offset": 2048, 00:12:02.183 "data_size": 63488 00:12:02.183 } 00:12:02.183 ] 00:12:02.183 }' 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.183 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.750 [2024-11-05 16:26:15.601509] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.750 [2024-11-05 16:26:15.601636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.750 [2024-11-05 16:26:15.604889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.750 [2024-11-05 16:26:15.605015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.750 [2024-11-05 16:26:15.605159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.750 [2024-11-05 16:26:15.605232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.750 { 00:12:02.750 "results": [ 00:12:02.750 { 00:12:02.750 "job": "raid_bdev1", 00:12:02.750 "core_mask": "0x1", 00:12:02.750 "workload": "randrw", 00:12:02.750 "percentage": 50, 00:12:02.750 "status": "finished", 00:12:02.750 "queue_depth": 1, 00:12:02.750 "io_size": 131072, 00:12:02.750 "runtime": 1.375631, 00:12:02.750 "iops": 13542.87596019572, 00:12:02.750 "mibps": 1692.859495024465, 00:12:02.750 "io_failed": 0, 00:12:02.750 "io_timeout": 0, 00:12:02.750 "avg_latency_us": 70.8322351843648, 00:12:02.750 "min_latency_us": 25.152838427947597, 00:12:02.750 "max_latency_us": 1688.482096069869 00:12:02.750 } 00:12:02.750 ], 00:12:02.750 "core_count": 1 00:12:02.750 } 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69511 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69511 ']' 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69511 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:02.750 16:26:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69511 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69511' 00:12:02.750 killing process with pid 69511 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69511 00:12:02.750 [2024-11-05 16:26:15.642773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.750 16:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69511 00:12:03.008 [2024-11-05 16:26:15.895370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dicPg2oMk2 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:04.401 00:12:04.401 real 0m4.800s 00:12:04.401 user 0m5.741s 00:12:04.401 sys 0m0.557s 00:12:04.401 
************************************ 00:12:04.401 END TEST raid_write_error_test 00:12:04.401 ************************************ 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.401 16:26:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.401 16:26:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:04.401 16:26:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:04.401 16:26:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:04.401 16:26:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:04.401 16:26:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.401 16:26:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.401 ************************************ 00:12:04.401 START TEST raid_state_function_test 00:12:04.401 ************************************ 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69660 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69660' 00:12:04.401 Process raid pid: 69660 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69660 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69660 ']' 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.401 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.402 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.402 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.402 16:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.402 [2024-11-05 16:26:17.410265] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
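The branch above sets `strip_size=64` and `-z 64` because the level under test (raid0) is not raid1, and leaves `superblock_create_arg` empty because superblock=false. That argument-building logic can be sketched as follows — the helper name `build_raid_create_args` and the parameter defaults are hypothetical, but the branching mirrors what the xtrace shows:

```python
def build_raid_create_args(raid_level, base_bdevs, name,
                           superblock=False, strip_size_kb=64):
    """Assemble bdev_raid_create arguments the way the test script does:
    raid1 takes no strip size, every other level gets -z <kb>, and -s is
    only appended when a superblock is requested."""
    args = ["bdev_raid_create", "-r", raid_level,
            "-b", " ".join(base_bdevs), "-n", name]
    if raid_level != "raid1":
        # Matches the '[' raid0 '!=' raid1 ']' branch: prepend -z 64.
        args[1:1] = ["-z", str(strip_size_kb)]
    if superblock:
        args.append("-s")
    return args
```

For the test above this yields `bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid`, while the earlier raid1 write-error test produced a `-s` call with no `-z`.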
00:12:04.402 [2024-11-05 16:26:17.410484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.668 [2024-11-05 16:26:17.588148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.668 [2024-11-05 16:26:17.724139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.937 [2024-11-05 16:26:17.959233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.937 [2024-11-05 16:26:17.959386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.536 [2024-11-05 16:26:18.319582] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.536 [2024-11-05 16:26:18.319707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.536 [2024-11-05 16:26:18.319749] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.536 [2024-11-05 16:26:18.319778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.536 [2024-11-05 16:26:18.319829] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:05.536 [2024-11-05 16:26:18.319856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.536 [2024-11-05 16:26:18.319876] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.536 [2024-11-05 16:26:18.319948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.536 "name": "Existed_Raid", 00:12:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.536 "strip_size_kb": 64, 00:12:05.536 "state": "configuring", 00:12:05.536 "raid_level": "raid0", 00:12:05.536 "superblock": false, 00:12:05.536 "num_base_bdevs": 4, 00:12:05.536 "num_base_bdevs_discovered": 0, 00:12:05.536 "num_base_bdevs_operational": 4, 00:12:05.536 "base_bdevs_list": [ 00:12:05.536 { 00:12:05.536 "name": "BaseBdev1", 00:12:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.536 "is_configured": false, 00:12:05.536 "data_offset": 0, 00:12:05.536 "data_size": 0 00:12:05.536 }, 00:12:05.536 { 00:12:05.536 "name": "BaseBdev2", 00:12:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.536 "is_configured": false, 00:12:05.536 "data_offset": 0, 00:12:05.536 "data_size": 0 00:12:05.536 }, 00:12:05.536 { 00:12:05.536 "name": "BaseBdev3", 00:12:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.536 "is_configured": false, 00:12:05.536 "data_offset": 0, 00:12:05.536 "data_size": 0 00:12:05.536 }, 00:12:05.536 { 00:12:05.536 "name": "BaseBdev4", 00:12:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.536 "is_configured": false, 00:12:05.536 "data_offset": 0, 00:12:05.536 "data_size": 0 00:12:05.536 } 00:12:05.536 ] 00:12:05.536 }' 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.536 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 [2024-11-05 16:26:18.802709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.795 [2024-11-05 16:26:18.802837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 [2024-11-05 16:26:18.814705] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.795 [2024-11-05 16:26:18.814759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.795 [2024-11-05 16:26:18.814769] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.795 [2024-11-05 16:26:18.814780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.795 [2024-11-05 16:26:18.814787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.795 [2024-11-05 16:26:18.814797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.795 [2024-11-05 16:26:18.814804] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.795 [2024-11-05 16:26:18.814813] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 [2024-11-05 16:26:18.869367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.795 BaseBdev1 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.795 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.055 [ 00:12:06.055 { 00:12:06.055 "name": "BaseBdev1", 00:12:06.055 "aliases": [ 00:12:06.055 "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad" 00:12:06.055 ], 00:12:06.055 "product_name": "Malloc disk", 00:12:06.055 "block_size": 512, 00:12:06.055 "num_blocks": 65536, 00:12:06.055 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:06.055 "assigned_rate_limits": { 00:12:06.055 "rw_ios_per_sec": 0, 00:12:06.055 "rw_mbytes_per_sec": 0, 00:12:06.055 "r_mbytes_per_sec": 0, 00:12:06.055 "w_mbytes_per_sec": 0 00:12:06.055 }, 00:12:06.055 "claimed": true, 00:12:06.055 "claim_type": "exclusive_write", 00:12:06.055 "zoned": false, 00:12:06.055 "supported_io_types": { 00:12:06.055 "read": true, 00:12:06.055 "write": true, 00:12:06.055 "unmap": true, 00:12:06.055 "flush": true, 00:12:06.055 "reset": true, 00:12:06.055 "nvme_admin": false, 00:12:06.055 "nvme_io": false, 00:12:06.055 "nvme_io_md": false, 00:12:06.055 "write_zeroes": true, 00:12:06.055 "zcopy": true, 00:12:06.055 "get_zone_info": false, 00:12:06.055 "zone_management": false, 00:12:06.055 "zone_append": false, 00:12:06.055 "compare": false, 00:12:06.055 "compare_and_write": false, 00:12:06.055 "abort": true, 00:12:06.055 "seek_hole": false, 00:12:06.055 "seek_data": false, 00:12:06.055 "copy": true, 00:12:06.055 "nvme_iov_md": false 00:12:06.055 }, 00:12:06.055 "memory_domains": [ 00:12:06.055 { 00:12:06.055 "dma_device_id": "system", 00:12:06.055 "dma_device_type": 1 00:12:06.055 }, 00:12:06.055 { 00:12:06.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.055 "dma_device_type": 2 00:12:06.055 } 00:12:06.055 ], 00:12:06.055 "driver_specific": {} 00:12:06.055 } 00:12:06.055 ] 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.055 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.056 "name": "Existed_Raid", 
00:12:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.056 "strip_size_kb": 64, 00:12:06.056 "state": "configuring", 00:12:06.056 "raid_level": "raid0", 00:12:06.056 "superblock": false, 00:12:06.056 "num_base_bdevs": 4, 00:12:06.056 "num_base_bdevs_discovered": 1, 00:12:06.056 "num_base_bdevs_operational": 4, 00:12:06.056 "base_bdevs_list": [ 00:12:06.056 { 00:12:06.056 "name": "BaseBdev1", 00:12:06.056 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:06.056 "is_configured": true, 00:12:06.056 "data_offset": 0, 00:12:06.056 "data_size": 65536 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev2", 00:12:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.056 "is_configured": false, 00:12:06.056 "data_offset": 0, 00:12:06.056 "data_size": 0 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev3", 00:12:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.056 "is_configured": false, 00:12:06.056 "data_offset": 0, 00:12:06.056 "data_size": 0 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev4", 00:12:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.056 "is_configured": false, 00:12:06.056 "data_offset": 0, 00:12:06.056 "data_size": 0 00:12:06.056 } 00:12:06.056 ] 00:12:06.056 }' 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.056 16:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.315 [2024-11-05 16:26:19.368650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.315 [2024-11-05 16:26:19.368779] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.315 [2024-11-05 16:26:19.380728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.315 [2024-11-05 16:26:19.382878] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:06.315 [2024-11-05 16:26:19.382970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:06.315 [2024-11-05 16:26:19.383004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:06.315 [2024-11-05 16:26:19.383032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:06.315 [2024-11-05 16:26:19.383054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:06.315 [2024-11-05 16:26:19.383078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.315 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.574 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.574 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.574 "name": "Existed_Raid", 00:12:06.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.574 "strip_size_kb": 64, 00:12:06.574 "state": "configuring", 00:12:06.574 "raid_level": "raid0", 00:12:06.574 "superblock": false, 00:12:06.574 "num_base_bdevs": 4, 00:12:06.574 
"num_base_bdevs_discovered": 1, 00:12:06.574 "num_base_bdevs_operational": 4, 00:12:06.574 "base_bdevs_list": [ 00:12:06.574 { 00:12:06.574 "name": "BaseBdev1", 00:12:06.574 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:06.574 "is_configured": true, 00:12:06.574 "data_offset": 0, 00:12:06.574 "data_size": 65536 00:12:06.574 }, 00:12:06.574 { 00:12:06.574 "name": "BaseBdev2", 00:12:06.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.574 "is_configured": false, 00:12:06.574 "data_offset": 0, 00:12:06.574 "data_size": 0 00:12:06.574 }, 00:12:06.574 { 00:12:06.574 "name": "BaseBdev3", 00:12:06.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.574 "is_configured": false, 00:12:06.574 "data_offset": 0, 00:12:06.574 "data_size": 0 00:12:06.574 }, 00:12:06.574 { 00:12:06.574 "name": "BaseBdev4", 00:12:06.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.574 "is_configured": false, 00:12:06.574 "data_offset": 0, 00:12:06.574 "data_size": 0 00:12:06.574 } 00:12:06.574 ] 00:12:06.574 }' 00:12:06.574 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.574 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.832 [2024-11-05 16:26:19.861884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.832 BaseBdev2 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:06.832 16:26:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.832 [ 00:12:06.832 { 00:12:06.832 "name": "BaseBdev2", 00:12:06.832 "aliases": [ 00:12:06.832 "b481647c-d2d3-42a6-9060-e39e6a4ed2a9" 00:12:06.832 ], 00:12:06.832 "product_name": "Malloc disk", 00:12:06.832 "block_size": 512, 00:12:06.832 "num_blocks": 65536, 00:12:06.832 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:06.832 "assigned_rate_limits": { 00:12:06.832 "rw_ios_per_sec": 0, 00:12:06.832 "rw_mbytes_per_sec": 0, 00:12:06.832 "r_mbytes_per_sec": 0, 00:12:06.832 "w_mbytes_per_sec": 0 00:12:06.832 }, 00:12:06.832 "claimed": true, 00:12:06.832 "claim_type": "exclusive_write", 00:12:06.832 "zoned": false, 00:12:06.832 "supported_io_types": { 
00:12:06.832 "read": true, 00:12:06.832 "write": true, 00:12:06.832 "unmap": true, 00:12:06.832 "flush": true, 00:12:06.832 "reset": true, 00:12:06.832 "nvme_admin": false, 00:12:06.832 "nvme_io": false, 00:12:06.832 "nvme_io_md": false, 00:12:06.832 "write_zeroes": true, 00:12:06.832 "zcopy": true, 00:12:06.832 "get_zone_info": false, 00:12:06.832 "zone_management": false, 00:12:06.832 "zone_append": false, 00:12:06.832 "compare": false, 00:12:06.832 "compare_and_write": false, 00:12:06.832 "abort": true, 00:12:06.832 "seek_hole": false, 00:12:06.832 "seek_data": false, 00:12:06.832 "copy": true, 00:12:06.832 "nvme_iov_md": false 00:12:06.832 }, 00:12:06.832 "memory_domains": [ 00:12:06.832 { 00:12:06.832 "dma_device_id": "system", 00:12:06.832 "dma_device_type": 1 00:12:06.832 }, 00:12:06.832 { 00:12:06.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.832 "dma_device_type": 2 00:12:06.832 } 00:12:06.832 ], 00:12:06.832 "driver_specific": {} 00:12:06.832 } 00:12:06.832 ] 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.832 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.090 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.090 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.090 "name": "Existed_Raid", 00:12:07.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.090 "strip_size_kb": 64, 00:12:07.090 "state": "configuring", 00:12:07.090 "raid_level": "raid0", 00:12:07.090 "superblock": false, 00:12:07.090 "num_base_bdevs": 4, 00:12:07.090 "num_base_bdevs_discovered": 2, 00:12:07.090 "num_base_bdevs_operational": 4, 00:12:07.090 "base_bdevs_list": [ 00:12:07.090 { 00:12:07.090 "name": "BaseBdev1", 00:12:07.090 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:07.090 "is_configured": true, 00:12:07.090 "data_offset": 0, 00:12:07.090 "data_size": 65536 00:12:07.090 }, 00:12:07.090 { 00:12:07.090 "name": "BaseBdev2", 00:12:07.091 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:07.091 
"is_configured": true, 00:12:07.091 "data_offset": 0, 00:12:07.091 "data_size": 65536 00:12:07.091 }, 00:12:07.091 { 00:12:07.091 "name": "BaseBdev3", 00:12:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.091 "is_configured": false, 00:12:07.091 "data_offset": 0, 00:12:07.091 "data_size": 0 00:12:07.091 }, 00:12:07.091 { 00:12:07.091 "name": "BaseBdev4", 00:12:07.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.091 "is_configured": false, 00:12:07.091 "data_offset": 0, 00:12:07.091 "data_size": 0 00:12:07.091 } 00:12:07.091 ] 00:12:07.091 }' 00:12:07.091 16:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.091 16:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.349 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.349 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.349 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.607 [2024-11-05 16:26:20.447171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.607 BaseBdev3 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.607 [ 00:12:07.607 { 00:12:07.607 "name": "BaseBdev3", 00:12:07.607 "aliases": [ 00:12:07.607 "cc3fbe16-f718-4e94-85d3-ccf3544f075a" 00:12:07.607 ], 00:12:07.607 "product_name": "Malloc disk", 00:12:07.607 "block_size": 512, 00:12:07.607 "num_blocks": 65536, 00:12:07.607 "uuid": "cc3fbe16-f718-4e94-85d3-ccf3544f075a", 00:12:07.607 "assigned_rate_limits": { 00:12:07.607 "rw_ios_per_sec": 0, 00:12:07.607 "rw_mbytes_per_sec": 0, 00:12:07.607 "r_mbytes_per_sec": 0, 00:12:07.607 "w_mbytes_per_sec": 0 00:12:07.607 }, 00:12:07.607 "claimed": true, 00:12:07.607 "claim_type": "exclusive_write", 00:12:07.607 "zoned": false, 00:12:07.607 "supported_io_types": { 00:12:07.607 "read": true, 00:12:07.607 "write": true, 00:12:07.607 "unmap": true, 00:12:07.607 "flush": true, 00:12:07.607 "reset": true, 00:12:07.607 "nvme_admin": false, 00:12:07.607 "nvme_io": false, 00:12:07.607 "nvme_io_md": false, 00:12:07.607 "write_zeroes": true, 00:12:07.607 "zcopy": true, 00:12:07.607 "get_zone_info": false, 00:12:07.607 "zone_management": false, 00:12:07.607 "zone_append": false, 00:12:07.607 "compare": false, 00:12:07.607 "compare_and_write": false, 
00:12:07.607 "abort": true, 00:12:07.607 "seek_hole": false, 00:12:07.607 "seek_data": false, 00:12:07.607 "copy": true, 00:12:07.607 "nvme_iov_md": false 00:12:07.607 }, 00:12:07.607 "memory_domains": [ 00:12:07.607 { 00:12:07.607 "dma_device_id": "system", 00:12:07.607 "dma_device_type": 1 00:12:07.607 }, 00:12:07.607 { 00:12:07.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.607 "dma_device_type": 2 00:12:07.607 } 00:12:07.607 ], 00:12:07.607 "driver_specific": {} 00:12:07.607 } 00:12:07.607 ] 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.607 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.608 "name": "Existed_Raid", 00:12:07.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.608 "strip_size_kb": 64, 00:12:07.608 "state": "configuring", 00:12:07.608 "raid_level": "raid0", 00:12:07.608 "superblock": false, 00:12:07.608 "num_base_bdevs": 4, 00:12:07.608 "num_base_bdevs_discovered": 3, 00:12:07.608 "num_base_bdevs_operational": 4, 00:12:07.608 "base_bdevs_list": [ 00:12:07.608 { 00:12:07.608 "name": "BaseBdev1", 00:12:07.608 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:07.608 "is_configured": true, 00:12:07.608 "data_offset": 0, 00:12:07.608 "data_size": 65536 00:12:07.608 }, 00:12:07.608 { 00:12:07.608 "name": "BaseBdev2", 00:12:07.608 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:07.608 "is_configured": true, 00:12:07.608 "data_offset": 0, 00:12:07.608 "data_size": 65536 00:12:07.608 }, 00:12:07.608 { 00:12:07.608 "name": "BaseBdev3", 00:12:07.608 "uuid": "cc3fbe16-f718-4e94-85d3-ccf3544f075a", 00:12:07.608 "is_configured": true, 00:12:07.608 "data_offset": 0, 00:12:07.608 "data_size": 65536 00:12:07.608 }, 00:12:07.608 { 00:12:07.608 "name": "BaseBdev4", 00:12:07.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.608 "is_configured": false, 
00:12:07.608 "data_offset": 0, 00:12:07.608 "data_size": 0 00:12:07.608 } 00:12:07.608 ] 00:12:07.608 }' 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.608 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.866 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:07.866 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.866 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.125 [2024-11-05 16:26:20.962641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.125 [2024-11-05 16:26:20.962696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.125 [2024-11-05 16:26:20.962705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:08.125 [2024-11-05 16:26:20.962989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:08.125 [2024-11-05 16:26:20.963172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.125 [2024-11-05 16:26:20.963186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:08.125 [2024-11-05 16:26:20.963485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.125 BaseBdev4 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:08.125 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.126 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.126 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.126 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:08.126 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.126 16:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.126 [ 00:12:08.126 { 00:12:08.126 "name": "BaseBdev4", 00:12:08.126 "aliases": [ 00:12:08.126 "d10fe0be-7c3c-4945-b282-9c01511fde85" 00:12:08.126 ], 00:12:08.126 "product_name": "Malloc disk", 00:12:08.126 "block_size": 512, 00:12:08.126 "num_blocks": 65536, 00:12:08.126 "uuid": "d10fe0be-7c3c-4945-b282-9c01511fde85", 00:12:08.126 "assigned_rate_limits": { 00:12:08.126 "rw_ios_per_sec": 0, 00:12:08.126 "rw_mbytes_per_sec": 0, 00:12:08.126 "r_mbytes_per_sec": 0, 00:12:08.126 "w_mbytes_per_sec": 0 00:12:08.126 }, 00:12:08.126 "claimed": true, 00:12:08.126 "claim_type": "exclusive_write", 00:12:08.126 "zoned": false, 00:12:08.126 "supported_io_types": { 00:12:08.126 "read": true, 00:12:08.126 "write": true, 00:12:08.126 "unmap": true, 00:12:08.126 "flush": true, 00:12:08.126 "reset": true, 00:12:08.126 
"nvme_admin": false, 00:12:08.126 "nvme_io": false, 00:12:08.126 "nvme_io_md": false, 00:12:08.126 "write_zeroes": true, 00:12:08.126 "zcopy": true, 00:12:08.126 "get_zone_info": false, 00:12:08.126 "zone_management": false, 00:12:08.126 "zone_append": false, 00:12:08.126 "compare": false, 00:12:08.126 "compare_and_write": false, 00:12:08.126 "abort": true, 00:12:08.126 "seek_hole": false, 00:12:08.126 "seek_data": false, 00:12:08.126 "copy": true, 00:12:08.126 "nvme_iov_md": false 00:12:08.126 }, 00:12:08.126 "memory_domains": [ 00:12:08.126 { 00:12:08.126 "dma_device_id": "system", 00:12:08.126 "dma_device_type": 1 00:12:08.126 }, 00:12:08.126 { 00:12:08.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.126 "dma_device_type": 2 00:12:08.126 } 00:12:08.126 ], 00:12:08.126 "driver_specific": {} 00:12:08.126 } 00:12:08.126 ] 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.126 16:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.126 "name": "Existed_Raid", 00:12:08.126 "uuid": "162c4c5a-332b-4db1-ae54-8a794fd4abba", 00:12:08.126 "strip_size_kb": 64, 00:12:08.126 "state": "online", 00:12:08.126 "raid_level": "raid0", 00:12:08.126 "superblock": false, 00:12:08.126 "num_base_bdevs": 4, 00:12:08.126 "num_base_bdevs_discovered": 4, 00:12:08.126 "num_base_bdevs_operational": 4, 00:12:08.126 "base_bdevs_list": [ 00:12:08.126 { 00:12:08.126 "name": "BaseBdev1", 00:12:08.126 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:08.126 "is_configured": true, 00:12:08.126 "data_offset": 0, 00:12:08.126 "data_size": 65536 00:12:08.126 }, 00:12:08.126 { 00:12:08.126 "name": "BaseBdev2", 00:12:08.126 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:08.126 "is_configured": true, 00:12:08.126 "data_offset": 0, 00:12:08.126 "data_size": 65536 00:12:08.126 }, 00:12:08.126 { 00:12:08.126 "name": "BaseBdev3", 00:12:08.126 "uuid": 
"cc3fbe16-f718-4e94-85d3-ccf3544f075a", 00:12:08.126 "is_configured": true, 00:12:08.126 "data_offset": 0, 00:12:08.126 "data_size": 65536 00:12:08.126 }, 00:12:08.126 { 00:12:08.126 "name": "BaseBdev4", 00:12:08.126 "uuid": "d10fe0be-7c3c-4945-b282-9c01511fde85", 00:12:08.126 "is_configured": true, 00:12:08.126 "data_offset": 0, 00:12:08.126 "data_size": 65536 00:12:08.126 } 00:12:08.126 ] 00:12:08.126 }' 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.126 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.693 [2024-11-05 16:26:21.502216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.693 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.693 16:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.693 "name": "Existed_Raid", 00:12:08.693 "aliases": [ 00:12:08.693 "162c4c5a-332b-4db1-ae54-8a794fd4abba" 00:12:08.693 ], 00:12:08.693 "product_name": "Raid Volume", 00:12:08.693 "block_size": 512, 00:12:08.693 "num_blocks": 262144, 00:12:08.693 "uuid": "162c4c5a-332b-4db1-ae54-8a794fd4abba", 00:12:08.693 "assigned_rate_limits": { 00:12:08.693 "rw_ios_per_sec": 0, 00:12:08.693 "rw_mbytes_per_sec": 0, 00:12:08.693 "r_mbytes_per_sec": 0, 00:12:08.693 "w_mbytes_per_sec": 0 00:12:08.693 }, 00:12:08.693 "claimed": false, 00:12:08.693 "zoned": false, 00:12:08.693 "supported_io_types": { 00:12:08.693 "read": true, 00:12:08.693 "write": true, 00:12:08.693 "unmap": true, 00:12:08.693 "flush": true, 00:12:08.693 "reset": true, 00:12:08.693 "nvme_admin": false, 00:12:08.693 "nvme_io": false, 00:12:08.693 "nvme_io_md": false, 00:12:08.693 "write_zeroes": true, 00:12:08.693 "zcopy": false, 00:12:08.693 "get_zone_info": false, 00:12:08.693 "zone_management": false, 00:12:08.693 "zone_append": false, 00:12:08.693 "compare": false, 00:12:08.693 "compare_and_write": false, 00:12:08.693 "abort": false, 00:12:08.693 "seek_hole": false, 00:12:08.693 "seek_data": false, 00:12:08.693 "copy": false, 00:12:08.693 "nvme_iov_md": false 00:12:08.693 }, 00:12:08.693 "memory_domains": [ 00:12:08.693 { 00:12:08.693 "dma_device_id": "system", 00:12:08.694 "dma_device_type": 1 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.694 "dma_device_type": 2 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "system", 00:12:08.694 "dma_device_type": 1 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.694 "dma_device_type": 2 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "system", 00:12:08.694 "dma_device_type": 1 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:08.694 "dma_device_type": 2 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "system", 00:12:08.694 "dma_device_type": 1 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.694 "dma_device_type": 2 00:12:08.694 } 00:12:08.694 ], 00:12:08.694 "driver_specific": { 00:12:08.694 "raid": { 00:12:08.694 "uuid": "162c4c5a-332b-4db1-ae54-8a794fd4abba", 00:12:08.694 "strip_size_kb": 64, 00:12:08.694 "state": "online", 00:12:08.694 "raid_level": "raid0", 00:12:08.694 "superblock": false, 00:12:08.694 "num_base_bdevs": 4, 00:12:08.694 "num_base_bdevs_discovered": 4, 00:12:08.694 "num_base_bdevs_operational": 4, 00:12:08.694 "base_bdevs_list": [ 00:12:08.694 { 00:12:08.694 "name": "BaseBdev1", 00:12:08.694 "uuid": "cfae2714-a0c8-4bfd-a83a-02c5eefaa9ad", 00:12:08.694 "is_configured": true, 00:12:08.694 "data_offset": 0, 00:12:08.694 "data_size": 65536 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "name": "BaseBdev2", 00:12:08.694 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:08.694 "is_configured": true, 00:12:08.694 "data_offset": 0, 00:12:08.694 "data_size": 65536 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "name": "BaseBdev3", 00:12:08.694 "uuid": "cc3fbe16-f718-4e94-85d3-ccf3544f075a", 00:12:08.694 "is_configured": true, 00:12:08.694 "data_offset": 0, 00:12:08.694 "data_size": 65536 00:12:08.694 }, 00:12:08.694 { 00:12:08.694 "name": "BaseBdev4", 00:12:08.694 "uuid": "d10fe0be-7c3c-4945-b282-9c01511fde85", 00:12:08.694 "is_configured": true, 00:12:08.694 "data_offset": 0, 00:12:08.694 "data_size": 65536 00:12:08.694 } 00:12:08.694 ] 00:12:08.694 } 00:12:08.694 } 00:12:08.694 }' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:08.694 BaseBdev2 00:12:08.694 BaseBdev3 
00:12:08.694 BaseBdev4' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.694 16:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.694 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.952 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.952 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.952 16:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.952 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.952 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.952 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.952 [2024-11-05 16:26:21.825352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.952 [2024-11-05 16:26:21.825393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.953 [2024-11-05 16:26:21.825452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.953 "name": "Existed_Raid", 00:12:08.953 "uuid": "162c4c5a-332b-4db1-ae54-8a794fd4abba", 00:12:08.953 "strip_size_kb": 64, 00:12:08.953 "state": "offline", 00:12:08.953 "raid_level": "raid0", 00:12:08.953 "superblock": false, 00:12:08.953 "num_base_bdevs": 4, 00:12:08.953 "num_base_bdevs_discovered": 3, 00:12:08.953 "num_base_bdevs_operational": 3, 00:12:08.953 "base_bdevs_list": [ 00:12:08.953 { 00:12:08.953 "name": null, 00:12:08.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.953 "is_configured": false, 00:12:08.953 "data_offset": 0, 00:12:08.953 "data_size": 65536 00:12:08.953 }, 00:12:08.953 { 00:12:08.953 "name": "BaseBdev2", 00:12:08.953 "uuid": "b481647c-d2d3-42a6-9060-e39e6a4ed2a9", 00:12:08.953 "is_configured": 
true, 00:12:08.953 "data_offset": 0, 00:12:08.953 "data_size": 65536 00:12:08.953 }, 00:12:08.953 { 00:12:08.953 "name": "BaseBdev3", 00:12:08.953 "uuid": "cc3fbe16-f718-4e94-85d3-ccf3544f075a", 00:12:08.953 "is_configured": true, 00:12:08.953 "data_offset": 0, 00:12:08.953 "data_size": 65536 00:12:08.953 }, 00:12:08.953 { 00:12:08.953 "name": "BaseBdev4", 00:12:08.953 "uuid": "d10fe0be-7c3c-4945-b282-9c01511fde85", 00:12:08.953 "is_configured": true, 00:12:08.953 "data_offset": 0, 00:12:08.953 "data_size": 65536 00:12:08.953 } 00:12:08.953 ] 00:12:08.953 }' 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.953 16:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.519 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.520 [2024-11-05 16:26:22.482755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.520 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.778 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.779 [2024-11-05 16:26:22.658994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.779 16:26:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.779 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.779 [2024-11-05 16:26:22.825566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:09.779 [2024-11-05 16:26:22.825674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.038 BaseBdev2 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.038 [ 00:12:10.038 { 00:12:10.038 "name": "BaseBdev2", 00:12:10.038 "aliases": [ 00:12:10.038 "c4d0b1cd-0a02-45e7-8652-512467775f96" 00:12:10.038 ], 00:12:10.038 "product_name": "Malloc disk", 00:12:10.038 "block_size": 512, 00:12:10.038 "num_blocks": 65536, 00:12:10.038 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:10.038 "assigned_rate_limits": { 00:12:10.038 "rw_ios_per_sec": 0, 00:12:10.038 "rw_mbytes_per_sec": 0, 00:12:10.038 "r_mbytes_per_sec": 0, 00:12:10.038 "w_mbytes_per_sec": 0 00:12:10.038 }, 00:12:10.038 "claimed": false, 00:12:10.038 "zoned": false, 00:12:10.038 "supported_io_types": { 00:12:10.038 "read": true, 00:12:10.038 "write": true, 00:12:10.038 "unmap": true, 00:12:10.038 "flush": true, 00:12:10.038 "reset": true, 00:12:10.038 "nvme_admin": false, 00:12:10.038 "nvme_io": false, 00:12:10.038 "nvme_io_md": false, 00:12:10.038 "write_zeroes": true, 00:12:10.038 "zcopy": true, 00:12:10.038 "get_zone_info": false, 00:12:10.038 "zone_management": false, 00:12:10.038 "zone_append": false, 00:12:10.038 "compare": false, 00:12:10.038 "compare_and_write": false, 00:12:10.038 "abort": true, 00:12:10.038 "seek_hole": false, 00:12:10.038 
"seek_data": false, 00:12:10.038 "copy": true, 00:12:10.038 "nvme_iov_md": false 00:12:10.038 }, 00:12:10.038 "memory_domains": [ 00:12:10.038 { 00:12:10.038 "dma_device_id": "system", 00:12:10.038 "dma_device_type": 1 00:12:10.038 }, 00:12:10.038 { 00:12:10.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.038 "dma_device_type": 2 00:12:10.038 } 00:12:10.038 ], 00:12:10.038 "driver_specific": {} 00:12:10.038 } 00:12:10.038 ] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.038 BaseBdev3 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.038 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 [ 00:12:10.297 { 00:12:10.297 "name": "BaseBdev3", 00:12:10.297 "aliases": [ 00:12:10.297 "98d1ec69-c59b-4fe2-b520-f10c654d422e" 00:12:10.297 ], 00:12:10.297 "product_name": "Malloc disk", 00:12:10.297 "block_size": 512, 00:12:10.297 "num_blocks": 65536, 00:12:10.297 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:10.297 "assigned_rate_limits": { 00:12:10.297 "rw_ios_per_sec": 0, 00:12:10.297 "rw_mbytes_per_sec": 0, 00:12:10.297 "r_mbytes_per_sec": 0, 00:12:10.297 "w_mbytes_per_sec": 0 00:12:10.297 }, 00:12:10.297 "claimed": false, 00:12:10.297 "zoned": false, 00:12:10.297 "supported_io_types": { 00:12:10.297 "read": true, 00:12:10.297 "write": true, 00:12:10.297 "unmap": true, 00:12:10.297 "flush": true, 00:12:10.297 "reset": true, 00:12:10.297 "nvme_admin": false, 00:12:10.297 "nvme_io": false, 00:12:10.297 "nvme_io_md": false, 00:12:10.297 "write_zeroes": true, 00:12:10.297 "zcopy": true, 00:12:10.297 "get_zone_info": false, 00:12:10.297 "zone_management": false, 00:12:10.297 "zone_append": false, 00:12:10.297 "compare": false, 00:12:10.297 "compare_and_write": false, 00:12:10.297 "abort": true, 00:12:10.297 "seek_hole": false, 00:12:10.297 "seek_data": false, 
00:12:10.297 "copy": true, 00:12:10.297 "nvme_iov_md": false 00:12:10.297 }, 00:12:10.297 "memory_domains": [ 00:12:10.297 { 00:12:10.297 "dma_device_id": "system", 00:12:10.297 "dma_device_type": 1 00:12:10.297 }, 00:12:10.297 { 00:12:10.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.297 "dma_device_type": 2 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "driver_specific": {} 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 BaseBdev4 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.297 
16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.297 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 [ 00:12:10.298 { 00:12:10.298 "name": "BaseBdev4", 00:12:10.298 "aliases": [ 00:12:10.298 "3ee3b113-37c0-4ff2-b09a-c8214939d929" 00:12:10.298 ], 00:12:10.298 "product_name": "Malloc disk", 00:12:10.298 "block_size": 512, 00:12:10.298 "num_blocks": 65536, 00:12:10.298 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:10.298 "assigned_rate_limits": { 00:12:10.298 "rw_ios_per_sec": 0, 00:12:10.298 "rw_mbytes_per_sec": 0, 00:12:10.298 "r_mbytes_per_sec": 0, 00:12:10.298 "w_mbytes_per_sec": 0 00:12:10.298 }, 00:12:10.298 "claimed": false, 00:12:10.298 "zoned": false, 00:12:10.298 "supported_io_types": { 00:12:10.298 "read": true, 00:12:10.298 "write": true, 00:12:10.298 "unmap": true, 00:12:10.298 "flush": true, 00:12:10.298 "reset": true, 00:12:10.298 "nvme_admin": false, 00:12:10.298 "nvme_io": false, 00:12:10.298 "nvme_io_md": false, 00:12:10.298 "write_zeroes": true, 00:12:10.298 "zcopy": true, 00:12:10.298 "get_zone_info": false, 00:12:10.298 "zone_management": false, 00:12:10.298 "zone_append": false, 00:12:10.298 "compare": false, 00:12:10.298 "compare_and_write": false, 00:12:10.298 "abort": true, 00:12:10.298 "seek_hole": false, 00:12:10.298 "seek_data": false, 00:12:10.298 
"copy": true, 00:12:10.298 "nvme_iov_md": false 00:12:10.298 }, 00:12:10.298 "memory_domains": [ 00:12:10.298 { 00:12:10.298 "dma_device_id": "system", 00:12:10.298 "dma_device_type": 1 00:12:10.298 }, 00:12:10.298 { 00:12:10.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.298 "dma_device_type": 2 00:12:10.298 } 00:12:10.298 ], 00:12:10.298 "driver_specific": {} 00:12:10.298 } 00:12:10.298 ] 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 [2024-11-05 16:26:23.240389] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.298 [2024-11-05 16:26:23.240442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.298 [2024-11-05 16:26:23.240471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.298 [2024-11-05 16:26:23.242642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.298 [2024-11-05 16:26:23.242703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 16:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.298 "name": "Existed_Raid", 00:12:10.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.298 "strip_size_kb": 64, 00:12:10.298 "state": "configuring", 00:12:10.298 
"raid_level": "raid0", 00:12:10.298 "superblock": false, 00:12:10.298 "num_base_bdevs": 4, 00:12:10.298 "num_base_bdevs_discovered": 3, 00:12:10.298 "num_base_bdevs_operational": 4, 00:12:10.298 "base_bdevs_list": [ 00:12:10.298 { 00:12:10.298 "name": "BaseBdev1", 00:12:10.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.298 "is_configured": false, 00:12:10.298 "data_offset": 0, 00:12:10.298 "data_size": 0 00:12:10.298 }, 00:12:10.298 { 00:12:10.298 "name": "BaseBdev2", 00:12:10.298 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:10.298 "is_configured": true, 00:12:10.298 "data_offset": 0, 00:12:10.298 "data_size": 65536 00:12:10.298 }, 00:12:10.298 { 00:12:10.298 "name": "BaseBdev3", 00:12:10.298 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:10.298 "is_configured": true, 00:12:10.298 "data_offset": 0, 00:12:10.298 "data_size": 65536 00:12:10.298 }, 00:12:10.298 { 00:12:10.298 "name": "BaseBdev4", 00:12:10.298 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:10.298 "is_configured": true, 00:12:10.298 "data_offset": 0, 00:12:10.298 "data_size": 65536 00:12:10.298 } 00:12:10.298 ] 00:12:10.298 }' 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.298 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.865 [2024-11-05 16:26:23.743559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.865 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.866 "name": "Existed_Raid", 00:12:10.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.866 "strip_size_kb": 64, 00:12:10.866 "state": "configuring", 00:12:10.866 "raid_level": "raid0", 00:12:10.866 "superblock": false, 00:12:10.866 
"num_base_bdevs": 4, 00:12:10.866 "num_base_bdevs_discovered": 2, 00:12:10.866 "num_base_bdevs_operational": 4, 00:12:10.866 "base_bdevs_list": [ 00:12:10.866 { 00:12:10.866 "name": "BaseBdev1", 00:12:10.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.866 "is_configured": false, 00:12:10.866 "data_offset": 0, 00:12:10.866 "data_size": 0 00:12:10.866 }, 00:12:10.866 { 00:12:10.866 "name": null, 00:12:10.866 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:10.866 "is_configured": false, 00:12:10.866 "data_offset": 0, 00:12:10.866 "data_size": 65536 00:12:10.866 }, 00:12:10.866 { 00:12:10.866 "name": "BaseBdev3", 00:12:10.866 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:10.866 "is_configured": true, 00:12:10.866 "data_offset": 0, 00:12:10.866 "data_size": 65536 00:12:10.866 }, 00:12:10.866 { 00:12:10.866 "name": "BaseBdev4", 00:12:10.866 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:10.866 "is_configured": true, 00:12:10.866 "data_offset": 0, 00:12:10.866 "data_size": 65536 00:12:10.866 } 00:12:10.866 ] 00:12:10.866 }' 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.866 16:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.125 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.125 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.125 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.125 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.383 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.383 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:11.383 16:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.383 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.383 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 [2024-11-05 16:26:24.301817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.384 BaseBdev1 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.384 [ 00:12:11.384 { 00:12:11.384 "name": "BaseBdev1", 00:12:11.384 "aliases": [ 00:12:11.384 "1d42bec9-1209-42c8-97a8-8ba6cc653fc0" 00:12:11.384 ], 00:12:11.384 "product_name": "Malloc disk", 00:12:11.384 "block_size": 512, 00:12:11.384 "num_blocks": 65536, 00:12:11.384 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:11.384 "assigned_rate_limits": { 00:12:11.384 "rw_ios_per_sec": 0, 00:12:11.384 "rw_mbytes_per_sec": 0, 00:12:11.384 "r_mbytes_per_sec": 0, 00:12:11.384 "w_mbytes_per_sec": 0 00:12:11.384 }, 00:12:11.384 "claimed": true, 00:12:11.384 "claim_type": "exclusive_write", 00:12:11.384 "zoned": false, 00:12:11.384 "supported_io_types": { 00:12:11.384 "read": true, 00:12:11.384 "write": true, 00:12:11.384 "unmap": true, 00:12:11.384 "flush": true, 00:12:11.384 "reset": true, 00:12:11.384 "nvme_admin": false, 00:12:11.384 "nvme_io": false, 00:12:11.384 "nvme_io_md": false, 00:12:11.384 "write_zeroes": true, 00:12:11.384 "zcopy": true, 00:12:11.384 "get_zone_info": false, 00:12:11.384 "zone_management": false, 00:12:11.384 "zone_append": false, 00:12:11.384 "compare": false, 00:12:11.384 "compare_and_write": false, 00:12:11.384 "abort": true, 00:12:11.384 "seek_hole": false, 00:12:11.384 "seek_data": false, 00:12:11.384 "copy": true, 00:12:11.384 "nvme_iov_md": false 00:12:11.384 }, 00:12:11.384 "memory_domains": [ 00:12:11.384 { 00:12:11.384 "dma_device_id": "system", 00:12:11.384 "dma_device_type": 1 00:12:11.384 }, 00:12:11.384 { 00:12:11.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.384 "dma_device_type": 2 00:12:11.384 } 00:12:11.384 ], 00:12:11.384 "driver_specific": {} 00:12:11.384 } 00:12:11.384 ] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.384 "name": "Existed_Raid", 00:12:11.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.384 "strip_size_kb": 64, 00:12:11.384 "state": "configuring", 00:12:11.384 "raid_level": "raid0", 00:12:11.384 "superblock": false, 
00:12:11.384 "num_base_bdevs": 4, 00:12:11.384 "num_base_bdevs_discovered": 3, 00:12:11.384 "num_base_bdevs_operational": 4, 00:12:11.384 "base_bdevs_list": [ 00:12:11.384 { 00:12:11.384 "name": "BaseBdev1", 00:12:11.384 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:11.384 "is_configured": true, 00:12:11.384 "data_offset": 0, 00:12:11.384 "data_size": 65536 00:12:11.384 }, 00:12:11.384 { 00:12:11.384 "name": null, 00:12:11.384 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:11.384 "is_configured": false, 00:12:11.384 "data_offset": 0, 00:12:11.384 "data_size": 65536 00:12:11.384 }, 00:12:11.384 { 00:12:11.384 "name": "BaseBdev3", 00:12:11.384 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:11.384 "is_configured": true, 00:12:11.384 "data_offset": 0, 00:12:11.384 "data_size": 65536 00:12:11.384 }, 00:12:11.384 { 00:12:11.384 "name": "BaseBdev4", 00:12:11.384 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:11.384 "is_configured": true, 00:12:11.384 "data_offset": 0, 00:12:11.384 "data_size": 65536 00:12:11.384 } 00:12:11.384 ] 00:12:11.384 }' 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.384 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:11.951 16:26:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.951 [2024-11-05 16:26:24.849010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.951 16:26:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.951 "name": "Existed_Raid", 00:12:11.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.951 "strip_size_kb": 64, 00:12:11.951 "state": "configuring", 00:12:11.951 "raid_level": "raid0", 00:12:11.951 "superblock": false, 00:12:11.951 "num_base_bdevs": 4, 00:12:11.951 "num_base_bdevs_discovered": 2, 00:12:11.951 "num_base_bdevs_operational": 4, 00:12:11.951 "base_bdevs_list": [ 00:12:11.951 { 00:12:11.951 "name": "BaseBdev1", 00:12:11.951 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:11.951 "is_configured": true, 00:12:11.951 "data_offset": 0, 00:12:11.951 "data_size": 65536 00:12:11.951 }, 00:12:11.951 { 00:12:11.951 "name": null, 00:12:11.951 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:11.951 "is_configured": false, 00:12:11.951 "data_offset": 0, 00:12:11.951 "data_size": 65536 00:12:11.951 }, 00:12:11.951 { 00:12:11.951 "name": null, 00:12:11.951 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:11.951 "is_configured": false, 00:12:11.951 "data_offset": 0, 00:12:11.951 "data_size": 65536 00:12:11.951 }, 00:12:11.951 { 00:12:11.951 "name": "BaseBdev4", 00:12:11.951 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:11.951 "is_configured": true, 00:12:11.951 "data_offset": 0, 00:12:11.951 "data_size": 65536 00:12:11.951 } 00:12:11.951 ] 00:12:11.951 }' 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.951 16:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.518 [2024-11-05 16:26:25.360218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.518 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.518 "name": "Existed_Raid", 00:12:12.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.518 "strip_size_kb": 64, 00:12:12.518 "state": "configuring", 00:12:12.519 "raid_level": "raid0", 00:12:12.519 "superblock": false, 00:12:12.519 "num_base_bdevs": 4, 00:12:12.519 "num_base_bdevs_discovered": 3, 00:12:12.519 "num_base_bdevs_operational": 4, 00:12:12.519 "base_bdevs_list": [ 00:12:12.519 { 00:12:12.519 "name": "BaseBdev1", 00:12:12.519 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:12.519 "is_configured": true, 00:12:12.519 "data_offset": 0, 00:12:12.519 "data_size": 65536 00:12:12.519 }, 00:12:12.519 { 00:12:12.519 "name": null, 00:12:12.519 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:12.519 "is_configured": false, 00:12:12.519 "data_offset": 0, 00:12:12.519 "data_size": 65536 00:12:12.519 }, 00:12:12.519 { 00:12:12.519 "name": "BaseBdev3", 00:12:12.519 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 
00:12:12.519 "is_configured": true, 00:12:12.519 "data_offset": 0, 00:12:12.519 "data_size": 65536 00:12:12.519 }, 00:12:12.519 { 00:12:12.519 "name": "BaseBdev4", 00:12:12.519 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:12.519 "is_configured": true, 00:12:12.519 "data_offset": 0, 00:12:12.519 "data_size": 65536 00:12:12.519 } 00:12:12.519 ] 00:12:12.519 }' 00:12:12.519 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.519 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.804 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.804 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.804 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.804 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.804 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.064 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:13.064 16:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:13.064 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.064 16:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.064 [2024-11-05 16:26:25.911354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:13.064 16:26:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.064 "name": "Existed_Raid", 00:12:13.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.064 "strip_size_kb": 64, 00:12:13.064 "state": "configuring", 00:12:13.064 "raid_level": "raid0", 00:12:13.064 "superblock": false, 00:12:13.064 "num_base_bdevs": 4, 00:12:13.064 "num_base_bdevs_discovered": 2, 00:12:13.064 
"num_base_bdevs_operational": 4, 00:12:13.064 "base_bdevs_list": [ 00:12:13.064 { 00:12:13.064 "name": null, 00:12:13.064 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:13.064 "is_configured": false, 00:12:13.064 "data_offset": 0, 00:12:13.064 "data_size": 65536 00:12:13.064 }, 00:12:13.064 { 00:12:13.064 "name": null, 00:12:13.064 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:13.064 "is_configured": false, 00:12:13.064 "data_offset": 0, 00:12:13.064 "data_size": 65536 00:12:13.064 }, 00:12:13.064 { 00:12:13.064 "name": "BaseBdev3", 00:12:13.064 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:13.064 "is_configured": true, 00:12:13.064 "data_offset": 0, 00:12:13.064 "data_size": 65536 00:12:13.064 }, 00:12:13.064 { 00:12:13.064 "name": "BaseBdev4", 00:12:13.064 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:13.064 "is_configured": true, 00:12:13.064 "data_offset": 0, 00:12:13.064 "data_size": 65536 00:12:13.064 } 00:12:13.064 ] 00:12:13.064 }' 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.064 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.634 [2024-11-05 16:26:26.573480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.634 
16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.634 "name": "Existed_Raid", 00:12:13.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.634 "strip_size_kb": 64, 00:12:13.634 "state": "configuring", 00:12:13.634 "raid_level": "raid0", 00:12:13.634 "superblock": false, 00:12:13.634 "num_base_bdevs": 4, 00:12:13.634 "num_base_bdevs_discovered": 3, 00:12:13.634 "num_base_bdevs_operational": 4, 00:12:13.634 "base_bdevs_list": [ 00:12:13.634 { 00:12:13.634 "name": null, 00:12:13.634 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:13.634 "is_configured": false, 00:12:13.634 "data_offset": 0, 00:12:13.634 "data_size": 65536 00:12:13.634 }, 00:12:13.634 { 00:12:13.634 "name": "BaseBdev2", 00:12:13.634 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:13.634 "is_configured": true, 00:12:13.634 "data_offset": 0, 00:12:13.634 "data_size": 65536 00:12:13.634 }, 00:12:13.634 { 00:12:13.634 "name": "BaseBdev3", 00:12:13.634 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:13.634 "is_configured": true, 00:12:13.634 "data_offset": 0, 00:12:13.634 "data_size": 65536 00:12:13.634 }, 00:12:13.634 { 00:12:13.634 "name": "BaseBdev4", 00:12:13.634 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:13.634 "is_configured": true, 00:12:13.634 "data_offset": 0, 00:12:13.634 "data_size": 65536 00:12:13.634 } 00:12:13.634 ] 00:12:13.634 }' 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.634 16:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:14.204 16:26:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1d42bec9-1209-42c8-97a8-8ba6cc653fc0 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 [2024-11-05 16:26:27.137632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:14.204 [2024-11-05 16:26:27.137701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:14.204 [2024-11-05 16:26:27.137711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:14.204 [2024-11-05 16:26:27.138014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:14.204 
[2024-11-05 16:26:27.138192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:14.204 [2024-11-05 16:26:27.138207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:14.204 [2024-11-05 16:26:27.138514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.204 NewBaseBdev 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:14.204 [ 00:12:14.204 { 00:12:14.204 "name": "NewBaseBdev", 00:12:14.204 "aliases": [ 00:12:14.204 "1d42bec9-1209-42c8-97a8-8ba6cc653fc0" 00:12:14.204 ], 00:12:14.204 "product_name": "Malloc disk", 00:12:14.204 "block_size": 512, 00:12:14.204 "num_blocks": 65536, 00:12:14.204 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:14.204 "assigned_rate_limits": { 00:12:14.204 "rw_ios_per_sec": 0, 00:12:14.204 "rw_mbytes_per_sec": 0, 00:12:14.204 "r_mbytes_per_sec": 0, 00:12:14.204 "w_mbytes_per_sec": 0 00:12:14.204 }, 00:12:14.204 "claimed": true, 00:12:14.204 "claim_type": "exclusive_write", 00:12:14.204 "zoned": false, 00:12:14.204 "supported_io_types": { 00:12:14.204 "read": true, 00:12:14.204 "write": true, 00:12:14.204 "unmap": true, 00:12:14.204 "flush": true, 00:12:14.204 "reset": true, 00:12:14.204 "nvme_admin": false, 00:12:14.204 "nvme_io": false, 00:12:14.204 "nvme_io_md": false, 00:12:14.204 "write_zeroes": true, 00:12:14.204 "zcopy": true, 00:12:14.204 "get_zone_info": false, 00:12:14.204 "zone_management": false, 00:12:14.204 "zone_append": false, 00:12:14.204 "compare": false, 00:12:14.204 "compare_and_write": false, 00:12:14.204 "abort": true, 00:12:14.204 "seek_hole": false, 00:12:14.204 "seek_data": false, 00:12:14.204 "copy": true, 00:12:14.204 "nvme_iov_md": false 00:12:14.204 }, 00:12:14.204 "memory_domains": [ 00:12:14.204 { 00:12:14.204 "dma_device_id": "system", 00:12:14.204 "dma_device_type": 1 00:12:14.204 }, 00:12:14.204 { 00:12:14.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.204 "dma_device_type": 2 00:12:14.204 } 00:12:14.204 ], 00:12:14.204 "driver_specific": {} 00:12:14.204 } 00:12:14.204 ] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.204 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.204 "name": "Existed_Raid", 00:12:14.204 "uuid": "a989635a-b22e-4aa7-b901-09df22012b69", 00:12:14.204 "strip_size_kb": 64, 00:12:14.204 "state": "online", 00:12:14.204 "raid_level": "raid0", 00:12:14.204 "superblock": false, 00:12:14.204 "num_base_bdevs": 4, 00:12:14.204 
"num_base_bdevs_discovered": 4, 00:12:14.204 "num_base_bdevs_operational": 4, 00:12:14.204 "base_bdevs_list": [ 00:12:14.204 { 00:12:14.204 "name": "NewBaseBdev", 00:12:14.204 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:14.204 "is_configured": true, 00:12:14.204 "data_offset": 0, 00:12:14.204 "data_size": 65536 00:12:14.204 }, 00:12:14.205 { 00:12:14.205 "name": "BaseBdev2", 00:12:14.205 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:14.205 "is_configured": true, 00:12:14.205 "data_offset": 0, 00:12:14.205 "data_size": 65536 00:12:14.205 }, 00:12:14.205 { 00:12:14.205 "name": "BaseBdev3", 00:12:14.205 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:14.205 "is_configured": true, 00:12:14.205 "data_offset": 0, 00:12:14.205 "data_size": 65536 00:12:14.205 }, 00:12:14.205 { 00:12:14.205 "name": "BaseBdev4", 00:12:14.205 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:14.205 "is_configured": true, 00:12:14.205 "data_offset": 0, 00:12:14.205 "data_size": 65536 00:12:14.205 } 00:12:14.205 ] 00:12:14.205 }' 00:12:14.205 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.205 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.772 [2024-11-05 16:26:27.637264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.772 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.772 "name": "Existed_Raid", 00:12:14.772 "aliases": [ 00:12:14.772 "a989635a-b22e-4aa7-b901-09df22012b69" 00:12:14.772 ], 00:12:14.772 "product_name": "Raid Volume", 00:12:14.772 "block_size": 512, 00:12:14.772 "num_blocks": 262144, 00:12:14.772 "uuid": "a989635a-b22e-4aa7-b901-09df22012b69", 00:12:14.772 "assigned_rate_limits": { 00:12:14.772 "rw_ios_per_sec": 0, 00:12:14.772 "rw_mbytes_per_sec": 0, 00:12:14.772 "r_mbytes_per_sec": 0, 00:12:14.772 "w_mbytes_per_sec": 0 00:12:14.772 }, 00:12:14.772 "claimed": false, 00:12:14.772 "zoned": false, 00:12:14.772 "supported_io_types": { 00:12:14.772 "read": true, 00:12:14.772 "write": true, 00:12:14.772 "unmap": true, 00:12:14.772 "flush": true, 00:12:14.772 "reset": true, 00:12:14.772 "nvme_admin": false, 00:12:14.772 "nvme_io": false, 00:12:14.772 "nvme_io_md": false, 00:12:14.772 "write_zeroes": true, 00:12:14.772 "zcopy": false, 00:12:14.772 "get_zone_info": false, 00:12:14.772 "zone_management": false, 00:12:14.772 "zone_append": false, 00:12:14.772 "compare": false, 00:12:14.772 "compare_and_write": false, 00:12:14.772 "abort": false, 00:12:14.772 "seek_hole": false, 00:12:14.772 "seek_data": false, 00:12:14.772 "copy": false, 00:12:14.772 "nvme_iov_md": false 00:12:14.772 }, 00:12:14.772 "memory_domains": [ 
00:12:14.772 { 00:12:14.772 "dma_device_id": "system", 00:12:14.772 "dma_device_type": 1 00:12:14.772 }, 00:12:14.772 { 00:12:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.773 "dma_device_type": 2 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "system", 00:12:14.773 "dma_device_type": 1 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.773 "dma_device_type": 2 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "system", 00:12:14.773 "dma_device_type": 1 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.773 "dma_device_type": 2 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "system", 00:12:14.773 "dma_device_type": 1 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.773 "dma_device_type": 2 00:12:14.773 } 00:12:14.773 ], 00:12:14.773 "driver_specific": { 00:12:14.773 "raid": { 00:12:14.773 "uuid": "a989635a-b22e-4aa7-b901-09df22012b69", 00:12:14.773 "strip_size_kb": 64, 00:12:14.773 "state": "online", 00:12:14.773 "raid_level": "raid0", 00:12:14.773 "superblock": false, 00:12:14.773 "num_base_bdevs": 4, 00:12:14.773 "num_base_bdevs_discovered": 4, 00:12:14.773 "num_base_bdevs_operational": 4, 00:12:14.773 "base_bdevs_list": [ 00:12:14.773 { 00:12:14.773 "name": "NewBaseBdev", 00:12:14.773 "uuid": "1d42bec9-1209-42c8-97a8-8ba6cc653fc0", 00:12:14.773 "is_configured": true, 00:12:14.773 "data_offset": 0, 00:12:14.773 "data_size": 65536 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "name": "BaseBdev2", 00:12:14.773 "uuid": "c4d0b1cd-0a02-45e7-8652-512467775f96", 00:12:14.773 "is_configured": true, 00:12:14.773 "data_offset": 0, 00:12:14.773 "data_size": 65536 00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "name": "BaseBdev3", 00:12:14.773 "uuid": "98d1ec69-c59b-4fe2-b520-f10c654d422e", 00:12:14.773 "is_configured": true, 00:12:14.773 "data_offset": 0, 00:12:14.773 "data_size": 65536 
00:12:14.773 }, 00:12:14.773 { 00:12:14.773 "name": "BaseBdev4", 00:12:14.773 "uuid": "3ee3b113-37c0-4ff2-b09a-c8214939d929", 00:12:14.773 "is_configured": true, 00:12:14.773 "data_offset": 0, 00:12:14.773 "data_size": 65536 00:12:14.773 } 00:12:14.773 ] 00:12:14.773 } 00:12:14.773 } 00:12:14.773 }' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:14.773 BaseBdev2 00:12:14.773 BaseBdev3 00:12:14.773 BaseBdev4' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.773 
16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.773 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.032 [2024-11-05 16:26:27.952369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.032 [2024-11-05 16:26:27.952404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.032 [2024-11-05 16:26:27.952531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.032 [2024-11-05 16:26:27.952624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.032 [2024-11-05 16:26:27.952637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69660 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69660 ']' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69660 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69660 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.032 killing process with pid 69660 00:12:15.032 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69660' 00:12:15.033 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69660 00:12:15.033 [2024-11-05 16:26:28.000169] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.033 16:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69660 00:12:15.601 [2024-11-05 16:26:28.444963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:16.980 00:12:16.980 real 0m12.372s 00:12:16.980 user 0m19.620s 00:12:16.980 sys 0m2.068s 00:12:16.980 ************************************ 00:12:16.980 END TEST raid_state_function_test 00:12:16.980 ************************************ 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.980 16:26:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:16.980 16:26:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:16.980 16:26:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:16.980 16:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.980 ************************************ 00:12:16.980 START TEST raid_state_function_test_sb 00:12:16.980 ************************************ 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:16.980 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:16.981 
16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70340 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70340' 00:12:16.981 Process raid pid: 70340 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70340 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70340 ']' 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:16.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:16.981 16:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.981 [2024-11-05 16:26:29.849160] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:12:16.981 [2024-11-05 16:26:29.849311] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.981 [2024-11-05 16:26:30.008880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.240 [2024-11-05 16:26:30.130047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.499 [2024-11-05 16:26:30.350773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.499 [2024-11-05 16:26:30.350814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.759 [2024-11-05 16:26:30.736889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.759 [2024-11-05 16:26:30.737035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.759 [2024-11-05 16:26:30.737055] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.759 [2024-11-05 16:26:30.737068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.759 [2024-11-05 16:26:30.737077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:17.759 [2024-11-05 16:26:30.737088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.759 [2024-11-05 16:26:30.737095] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:17.759 [2024-11-05 16:26:30.737105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.759 16:26:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.759 "name": "Existed_Raid", 00:12:17.759 "uuid": "0d169e4e-d4cb-40be-b339-e299f431ecfd", 00:12:17.759 "strip_size_kb": 64, 00:12:17.759 "state": "configuring", 00:12:17.759 "raid_level": "raid0", 00:12:17.759 "superblock": true, 00:12:17.759 "num_base_bdevs": 4, 00:12:17.759 "num_base_bdevs_discovered": 0, 00:12:17.759 "num_base_bdevs_operational": 4, 00:12:17.759 "base_bdevs_list": [ 00:12:17.759 { 00:12:17.759 "name": "BaseBdev1", 00:12:17.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.759 "is_configured": false, 00:12:17.759 "data_offset": 0, 00:12:17.759 "data_size": 0 00:12:17.759 }, 00:12:17.759 { 00:12:17.759 "name": "BaseBdev2", 00:12:17.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.759 "is_configured": false, 00:12:17.759 "data_offset": 0, 00:12:17.759 "data_size": 0 00:12:17.759 }, 00:12:17.759 { 00:12:17.759 "name": "BaseBdev3", 00:12:17.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.759 "is_configured": false, 00:12:17.759 "data_offset": 0, 00:12:17.759 "data_size": 0 00:12:17.759 }, 00:12:17.759 { 00:12:17.759 "name": "BaseBdev4", 00:12:17.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.759 "is_configured": false, 00:12:17.759 "data_offset": 0, 00:12:17.759 "data_size": 0 00:12:17.759 } 00:12:17.759 ] 00:12:17.759 }' 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.759 16:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 [2024-11-05 16:26:31.212386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.327 [2024-11-05 16:26:31.212528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 [2024-11-05 16:26:31.224583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.327 [2024-11-05 16:26:31.224681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.327 [2024-11-05 16:26:31.224715] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.327 [2024-11-05 16:26:31.224759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.327 [2024-11-05 16:26:31.224790] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.327 [2024-11-05 16:26:31.224818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.327 [2024-11-05 16:26:31.224859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:18.327 [2024-11-05 16:26:31.224894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 [2024-11-05 16:26:31.275580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.327 BaseBdev1 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.327 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 [ 00:12:18.327 { 00:12:18.327 "name": "BaseBdev1", 00:12:18.327 "aliases": [ 00:12:18.327 "f4c79a86-7308-4590-ab79-5c67198beb74" 00:12:18.327 ], 00:12:18.327 "product_name": "Malloc disk", 00:12:18.327 "block_size": 512, 00:12:18.327 "num_blocks": 65536, 00:12:18.327 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:18.327 "assigned_rate_limits": { 00:12:18.327 "rw_ios_per_sec": 0, 00:12:18.327 "rw_mbytes_per_sec": 0, 00:12:18.327 "r_mbytes_per_sec": 0, 00:12:18.327 "w_mbytes_per_sec": 0 00:12:18.327 }, 00:12:18.327 "claimed": true, 00:12:18.327 "claim_type": "exclusive_write", 00:12:18.327 "zoned": false, 00:12:18.327 "supported_io_types": { 00:12:18.327 "read": true, 00:12:18.327 "write": true, 00:12:18.327 "unmap": true, 00:12:18.327 "flush": true, 00:12:18.327 "reset": true, 00:12:18.327 "nvme_admin": false, 00:12:18.327 "nvme_io": false, 00:12:18.327 "nvme_io_md": false, 00:12:18.327 "write_zeroes": true, 00:12:18.327 "zcopy": true, 00:12:18.327 "get_zone_info": false, 00:12:18.327 "zone_management": false, 00:12:18.327 "zone_append": false, 00:12:18.327 "compare": false, 00:12:18.327 "compare_and_write": false, 00:12:18.327 "abort": true, 00:12:18.327 "seek_hole": false, 00:12:18.327 "seek_data": false, 00:12:18.327 "copy": true, 00:12:18.327 "nvme_iov_md": false 00:12:18.327 }, 00:12:18.327 "memory_domains": [ 00:12:18.327 { 00:12:18.328 "dma_device_id": "system", 00:12:18.328 "dma_device_type": 1 00:12:18.328 }, 00:12:18.328 { 00:12:18.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.328 "dma_device_type": 2 00:12:18.328 } 00:12:18.328 ], 00:12:18.328 "driver_specific": {} 
00:12:18.328 } 00:12:18.328 ] 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.328 "name": "Existed_Raid", 00:12:18.328 "uuid": "4e050bb2-6238-4eb7-a085-cf31fb4f1fbb", 00:12:18.328 "strip_size_kb": 64, 00:12:18.328 "state": "configuring", 00:12:18.328 "raid_level": "raid0", 00:12:18.328 "superblock": true, 00:12:18.328 "num_base_bdevs": 4, 00:12:18.328 "num_base_bdevs_discovered": 1, 00:12:18.328 "num_base_bdevs_operational": 4, 00:12:18.328 "base_bdevs_list": [ 00:12:18.328 { 00:12:18.328 "name": "BaseBdev1", 00:12:18.328 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:18.328 "is_configured": true, 00:12:18.328 "data_offset": 2048, 00:12:18.328 "data_size": 63488 00:12:18.328 }, 00:12:18.328 { 00:12:18.328 "name": "BaseBdev2", 00:12:18.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.328 "is_configured": false, 00:12:18.328 "data_offset": 0, 00:12:18.328 "data_size": 0 00:12:18.328 }, 00:12:18.328 { 00:12:18.328 "name": "BaseBdev3", 00:12:18.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.328 "is_configured": false, 00:12:18.328 "data_offset": 0, 00:12:18.328 "data_size": 0 00:12:18.328 }, 00:12:18.328 { 00:12:18.328 "name": "BaseBdev4", 00:12:18.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.328 "is_configured": false, 00:12:18.328 "data_offset": 0, 00:12:18.328 "data_size": 0 00:12:18.328 } 00:12:18.328 ] 00:12:18.328 }' 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.328 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.897 [2024-11-05 16:26:31.730873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.897 [2024-11-05 16:26:31.730939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.897 [2024-11-05 16:26:31.742948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.897 [2024-11-05 16:26:31.745115] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.897 [2024-11-05 16:26:31.745249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.897 [2024-11-05 16:26:31.745266] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:18.897 [2024-11-05 16:26:31.745279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.897 [2024-11-05 16:26:31.745287] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:18.897 [2024-11-05 16:26:31.745297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:18.897 16:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.897 "name": 
"Existed_Raid", 00:12:18.897 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:18.897 "strip_size_kb": 64, 00:12:18.897 "state": "configuring", 00:12:18.897 "raid_level": "raid0", 00:12:18.897 "superblock": true, 00:12:18.897 "num_base_bdevs": 4, 00:12:18.897 "num_base_bdevs_discovered": 1, 00:12:18.897 "num_base_bdevs_operational": 4, 00:12:18.897 "base_bdevs_list": [ 00:12:18.897 { 00:12:18.897 "name": "BaseBdev1", 00:12:18.897 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:18.897 "is_configured": true, 00:12:18.897 "data_offset": 2048, 00:12:18.897 "data_size": 63488 00:12:18.897 }, 00:12:18.897 { 00:12:18.897 "name": "BaseBdev2", 00:12:18.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.897 "is_configured": false, 00:12:18.897 "data_offset": 0, 00:12:18.897 "data_size": 0 00:12:18.897 }, 00:12:18.897 { 00:12:18.897 "name": "BaseBdev3", 00:12:18.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.897 "is_configured": false, 00:12:18.897 "data_offset": 0, 00:12:18.897 "data_size": 0 00:12:18.897 }, 00:12:18.897 { 00:12:18.897 "name": "BaseBdev4", 00:12:18.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.897 "is_configured": false, 00:12:18.897 "data_offset": 0, 00:12:18.897 "data_size": 0 00:12:18.897 } 00:12:18.897 ] 00:12:18.897 }' 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.897 16:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.156 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.156 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.156 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.415 [2024-11-05 16:26:32.265357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:19.415 BaseBdev2 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:19.415 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 [ 00:12:19.416 { 00:12:19.416 "name": "BaseBdev2", 00:12:19.416 "aliases": [ 00:12:19.416 "15f377b8-a30d-483f-adac-a01f81def033" 00:12:19.416 ], 00:12:19.416 "product_name": "Malloc disk", 00:12:19.416 "block_size": 512, 00:12:19.416 "num_blocks": 65536, 00:12:19.416 "uuid": "15f377b8-a30d-483f-adac-a01f81def033", 00:12:19.416 
"assigned_rate_limits": { 00:12:19.416 "rw_ios_per_sec": 0, 00:12:19.416 "rw_mbytes_per_sec": 0, 00:12:19.416 "r_mbytes_per_sec": 0, 00:12:19.416 "w_mbytes_per_sec": 0 00:12:19.416 }, 00:12:19.416 "claimed": true, 00:12:19.416 "claim_type": "exclusive_write", 00:12:19.416 "zoned": false, 00:12:19.416 "supported_io_types": { 00:12:19.416 "read": true, 00:12:19.416 "write": true, 00:12:19.416 "unmap": true, 00:12:19.416 "flush": true, 00:12:19.416 "reset": true, 00:12:19.416 "nvme_admin": false, 00:12:19.416 "nvme_io": false, 00:12:19.416 "nvme_io_md": false, 00:12:19.416 "write_zeroes": true, 00:12:19.416 "zcopy": true, 00:12:19.416 "get_zone_info": false, 00:12:19.416 "zone_management": false, 00:12:19.416 "zone_append": false, 00:12:19.416 "compare": false, 00:12:19.416 "compare_and_write": false, 00:12:19.416 "abort": true, 00:12:19.416 "seek_hole": false, 00:12:19.416 "seek_data": false, 00:12:19.416 "copy": true, 00:12:19.416 "nvme_iov_md": false 00:12:19.416 }, 00:12:19.416 "memory_domains": [ 00:12:19.416 { 00:12:19.416 "dma_device_id": "system", 00:12:19.416 "dma_device_type": 1 00:12:19.416 }, 00:12:19.416 { 00:12:19.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.416 "dma_device_type": 2 00:12:19.416 } 00:12:19.416 ], 00:12:19.416 "driver_specific": {} 00:12:19.416 } 00:12:19.416 ] 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.416 "name": "Existed_Raid", 00:12:19.416 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:19.416 "strip_size_kb": 64, 00:12:19.416 "state": "configuring", 00:12:19.416 "raid_level": "raid0", 00:12:19.416 "superblock": true, 00:12:19.416 "num_base_bdevs": 4, 00:12:19.416 "num_base_bdevs_discovered": 2, 00:12:19.416 "num_base_bdevs_operational": 4, 
00:12:19.416 "base_bdevs_list": [ 00:12:19.416 { 00:12:19.416 "name": "BaseBdev1", 00:12:19.416 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:19.416 "is_configured": true, 00:12:19.416 "data_offset": 2048, 00:12:19.416 "data_size": 63488 00:12:19.416 }, 00:12:19.416 { 00:12:19.416 "name": "BaseBdev2", 00:12:19.416 "uuid": "15f377b8-a30d-483f-adac-a01f81def033", 00:12:19.416 "is_configured": true, 00:12:19.416 "data_offset": 2048, 00:12:19.416 "data_size": 63488 00:12:19.416 }, 00:12:19.416 { 00:12:19.416 "name": "BaseBdev3", 00:12:19.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.416 "is_configured": false, 00:12:19.416 "data_offset": 0, 00:12:19.416 "data_size": 0 00:12:19.416 }, 00:12:19.416 { 00:12:19.416 "name": "BaseBdev4", 00:12:19.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.416 "is_configured": false, 00:12:19.416 "data_offset": 0, 00:12:19.416 "data_size": 0 00:12:19.416 } 00:12:19.416 ] 00:12:19.416 }' 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.416 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.985 [2024-11-05 16:26:32.837185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.985 BaseBdev3 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.985 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.985 [ 00:12:19.985 { 00:12:19.986 "name": "BaseBdev3", 00:12:19.986 "aliases": [ 00:12:19.986 "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be" 00:12:19.986 ], 00:12:19.986 "product_name": "Malloc disk", 00:12:19.986 "block_size": 512, 00:12:19.986 "num_blocks": 65536, 00:12:19.986 "uuid": "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be", 00:12:19.986 "assigned_rate_limits": { 00:12:19.986 "rw_ios_per_sec": 0, 00:12:19.986 "rw_mbytes_per_sec": 0, 00:12:19.986 "r_mbytes_per_sec": 0, 00:12:19.986 "w_mbytes_per_sec": 0 00:12:19.986 }, 00:12:19.986 "claimed": true, 00:12:19.986 "claim_type": "exclusive_write", 00:12:19.986 "zoned": false, 00:12:19.986 "supported_io_types": { 00:12:19.986 "read": true, 00:12:19.986 
"write": true, 00:12:19.986 "unmap": true, 00:12:19.986 "flush": true, 00:12:19.986 "reset": true, 00:12:19.986 "nvme_admin": false, 00:12:19.986 "nvme_io": false, 00:12:19.986 "nvme_io_md": false, 00:12:19.986 "write_zeroes": true, 00:12:19.986 "zcopy": true, 00:12:19.986 "get_zone_info": false, 00:12:19.986 "zone_management": false, 00:12:19.986 "zone_append": false, 00:12:19.986 "compare": false, 00:12:19.986 "compare_and_write": false, 00:12:19.986 "abort": true, 00:12:19.986 "seek_hole": false, 00:12:19.986 "seek_data": false, 00:12:19.986 "copy": true, 00:12:19.986 "nvme_iov_md": false 00:12:19.986 }, 00:12:19.986 "memory_domains": [ 00:12:19.986 { 00:12:19.986 "dma_device_id": "system", 00:12:19.986 "dma_device_type": 1 00:12:19.986 }, 00:12:19.986 { 00:12:19.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.986 "dma_device_type": 2 00:12:19.986 } 00:12:19.986 ], 00:12:19.986 "driver_specific": {} 00:12:19.986 } 00:12:19.986 ] 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.986 "name": "Existed_Raid", 00:12:19.986 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:19.986 "strip_size_kb": 64, 00:12:19.986 "state": "configuring", 00:12:19.986 "raid_level": "raid0", 00:12:19.986 "superblock": true, 00:12:19.986 "num_base_bdevs": 4, 00:12:19.986 "num_base_bdevs_discovered": 3, 00:12:19.986 "num_base_bdevs_operational": 4, 00:12:19.986 "base_bdevs_list": [ 00:12:19.986 { 00:12:19.986 "name": "BaseBdev1", 00:12:19.986 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:19.986 "is_configured": true, 00:12:19.986 "data_offset": 2048, 00:12:19.986 "data_size": 63488 00:12:19.986 }, 00:12:19.986 { 00:12:19.986 "name": "BaseBdev2", 00:12:19.986 "uuid": 
"15f377b8-a30d-483f-adac-a01f81def033", 00:12:19.986 "is_configured": true, 00:12:19.986 "data_offset": 2048, 00:12:19.986 "data_size": 63488 00:12:19.986 }, 00:12:19.986 { 00:12:19.986 "name": "BaseBdev3", 00:12:19.986 "uuid": "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be", 00:12:19.986 "is_configured": true, 00:12:19.986 "data_offset": 2048, 00:12:19.986 "data_size": 63488 00:12:19.986 }, 00:12:19.986 { 00:12:19.986 "name": "BaseBdev4", 00:12:19.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.986 "is_configured": false, 00:12:19.986 "data_offset": 0, 00:12:19.986 "data_size": 0 00:12:19.986 } 00:12:19.986 ] 00:12:19.986 }' 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.986 16:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.246 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.246 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.246 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.505 [2024-11-05 16:26:33.364137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.505 [2024-11-05 16:26:33.364426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:20.505 [2024-11-05 16:26:33.364443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:20.505 [2024-11-05 16:26:33.364777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:20.505 [2024-11-05 16:26:33.364963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:20.505 [2024-11-05 16:26:33.364985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:20.505 [2024-11-05 16:26:33.365146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.505 BaseBdev4 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.505 [ 00:12:20.505 { 00:12:20.505 "name": "BaseBdev4", 00:12:20.505 "aliases": [ 00:12:20.505 "a46aaa92-f218-40d0-a5b5-7fde28bf125b" 00:12:20.505 ], 00:12:20.505 "product_name": "Malloc disk", 00:12:20.505 "block_size": 512, 00:12:20.505 
"num_blocks": 65536, 00:12:20.505 "uuid": "a46aaa92-f218-40d0-a5b5-7fde28bf125b", 00:12:20.505 "assigned_rate_limits": { 00:12:20.505 "rw_ios_per_sec": 0, 00:12:20.505 "rw_mbytes_per_sec": 0, 00:12:20.505 "r_mbytes_per_sec": 0, 00:12:20.505 "w_mbytes_per_sec": 0 00:12:20.505 }, 00:12:20.505 "claimed": true, 00:12:20.505 "claim_type": "exclusive_write", 00:12:20.505 "zoned": false, 00:12:20.505 "supported_io_types": { 00:12:20.505 "read": true, 00:12:20.505 "write": true, 00:12:20.505 "unmap": true, 00:12:20.505 "flush": true, 00:12:20.505 "reset": true, 00:12:20.505 "nvme_admin": false, 00:12:20.505 "nvme_io": false, 00:12:20.505 "nvme_io_md": false, 00:12:20.505 "write_zeroes": true, 00:12:20.505 "zcopy": true, 00:12:20.505 "get_zone_info": false, 00:12:20.505 "zone_management": false, 00:12:20.505 "zone_append": false, 00:12:20.505 "compare": false, 00:12:20.505 "compare_and_write": false, 00:12:20.505 "abort": true, 00:12:20.505 "seek_hole": false, 00:12:20.505 "seek_data": false, 00:12:20.505 "copy": true, 00:12:20.505 "nvme_iov_md": false 00:12:20.505 }, 00:12:20.505 "memory_domains": [ 00:12:20.505 { 00:12:20.505 "dma_device_id": "system", 00:12:20.505 "dma_device_type": 1 00:12:20.505 }, 00:12:20.505 { 00:12:20.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.505 "dma_device_type": 2 00:12:20.505 } 00:12:20.505 ], 00:12:20.505 "driver_specific": {} 00:12:20.505 } 00:12:20.505 ] 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.505 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.506 "name": "Existed_Raid", 00:12:20.506 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:20.506 "strip_size_kb": 64, 00:12:20.506 "state": "online", 00:12:20.506 "raid_level": "raid0", 00:12:20.506 "superblock": true, 00:12:20.506 "num_base_bdevs": 4, 
00:12:20.506 "num_base_bdevs_discovered": 4, 00:12:20.506 "num_base_bdevs_operational": 4, 00:12:20.506 "base_bdevs_list": [ 00:12:20.506 { 00:12:20.506 "name": "BaseBdev1", 00:12:20.506 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:20.506 "is_configured": true, 00:12:20.506 "data_offset": 2048, 00:12:20.506 "data_size": 63488 00:12:20.506 }, 00:12:20.506 { 00:12:20.506 "name": "BaseBdev2", 00:12:20.506 "uuid": "15f377b8-a30d-483f-adac-a01f81def033", 00:12:20.506 "is_configured": true, 00:12:20.506 "data_offset": 2048, 00:12:20.506 "data_size": 63488 00:12:20.506 }, 00:12:20.506 { 00:12:20.506 "name": "BaseBdev3", 00:12:20.506 "uuid": "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be", 00:12:20.506 "is_configured": true, 00:12:20.506 "data_offset": 2048, 00:12:20.506 "data_size": 63488 00:12:20.506 }, 00:12:20.506 { 00:12:20.506 "name": "BaseBdev4", 00:12:20.506 "uuid": "a46aaa92-f218-40d0-a5b5-7fde28bf125b", 00:12:20.506 "is_configured": true, 00:12:20.506 "data_offset": 2048, 00:12:20.506 "data_size": 63488 00:12:20.506 } 00:12:20.506 ] 00:12:20.506 }' 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.506 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.075 
16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.075 [2024-11-05 16:26:33.903709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.075 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.075 "name": "Existed_Raid", 00:12:21.075 "aliases": [ 00:12:21.075 "62b57db3-f483-4534-8e36-c83c2af4c7e7" 00:12:21.075 ], 00:12:21.075 "product_name": "Raid Volume", 00:12:21.075 "block_size": 512, 00:12:21.075 "num_blocks": 253952, 00:12:21.075 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:21.075 "assigned_rate_limits": { 00:12:21.075 "rw_ios_per_sec": 0, 00:12:21.075 "rw_mbytes_per_sec": 0, 00:12:21.075 "r_mbytes_per_sec": 0, 00:12:21.075 "w_mbytes_per_sec": 0 00:12:21.075 }, 00:12:21.075 "claimed": false, 00:12:21.075 "zoned": false, 00:12:21.075 "supported_io_types": { 00:12:21.075 "read": true, 00:12:21.075 "write": true, 00:12:21.075 "unmap": true, 00:12:21.075 "flush": true, 00:12:21.075 "reset": true, 00:12:21.075 "nvme_admin": false, 00:12:21.075 "nvme_io": false, 00:12:21.075 "nvme_io_md": false, 00:12:21.075 "write_zeroes": true, 00:12:21.075 "zcopy": false, 00:12:21.075 "get_zone_info": false, 00:12:21.075 "zone_management": false, 00:12:21.075 "zone_append": false, 00:12:21.075 "compare": false, 00:12:21.075 "compare_and_write": false, 00:12:21.075 "abort": false, 00:12:21.075 "seek_hole": false, 00:12:21.075 "seek_data": false, 00:12:21.075 "copy": false, 00:12:21.075 
"nvme_iov_md": false 00:12:21.075 }, 00:12:21.075 "memory_domains": [ 00:12:21.075 { 00:12:21.075 "dma_device_id": "system", 00:12:21.075 "dma_device_type": 1 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.075 "dma_device_type": 2 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "system", 00:12:21.075 "dma_device_type": 1 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.075 "dma_device_type": 2 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "system", 00:12:21.075 "dma_device_type": 1 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.075 "dma_device_type": 2 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "system", 00:12:21.075 "dma_device_type": 1 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.075 "dma_device_type": 2 00:12:21.075 } 00:12:21.075 ], 00:12:21.075 "driver_specific": { 00:12:21.075 "raid": { 00:12:21.075 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:21.075 "strip_size_kb": 64, 00:12:21.075 "state": "online", 00:12:21.075 "raid_level": "raid0", 00:12:21.075 "superblock": true, 00:12:21.075 "num_base_bdevs": 4, 00:12:21.075 "num_base_bdevs_discovered": 4, 00:12:21.075 "num_base_bdevs_operational": 4, 00:12:21.075 "base_bdevs_list": [ 00:12:21.075 { 00:12:21.075 "name": "BaseBdev1", 00:12:21.075 "uuid": "f4c79a86-7308-4590-ab79-5c67198beb74", 00:12:21.075 "is_configured": true, 00:12:21.075 "data_offset": 2048, 00:12:21.075 "data_size": 63488 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "name": "BaseBdev2", 00:12:21.075 "uuid": "15f377b8-a30d-483f-adac-a01f81def033", 00:12:21.075 "is_configured": true, 00:12:21.075 "data_offset": 2048, 00:12:21.075 "data_size": 63488 00:12:21.075 }, 00:12:21.075 { 00:12:21.075 "name": "BaseBdev3", 00:12:21.075 "uuid": "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be", 00:12:21.075 "is_configured": true, 
00:12:21.075 "data_offset": 2048, 00:12:21.075 "data_size": 63488 00:12:21.076 }, 00:12:21.076 { 00:12:21.076 "name": "BaseBdev4", 00:12:21.076 "uuid": "a46aaa92-f218-40d0-a5b5-7fde28bf125b", 00:12:21.076 "is_configured": true, 00:12:21.076 "data_offset": 2048, 00:12:21.076 "data_size": 63488 00:12:21.076 } 00:12:21.076 ] 00:12:21.076 } 00:12:21.076 } 00:12:21.076 }' 00:12:21.076 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.076 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:21.076 BaseBdev2 00:12:21.076 BaseBdev3 00:12:21.076 BaseBdev4' 00:12:21.076 16:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.076 16:26:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.076 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.336 [2024-11-05 16:26:34.230849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.336 [2024-11-05 16:26:34.230888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.336 [2024-11-05 16:26:34.230943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.336 "name": "Existed_Raid", 00:12:21.336 "uuid": "62b57db3-f483-4534-8e36-c83c2af4c7e7", 00:12:21.336 "strip_size_kb": 64, 00:12:21.336 "state": "offline", 00:12:21.336 "raid_level": "raid0", 00:12:21.336 "superblock": true, 00:12:21.336 "num_base_bdevs": 4, 00:12:21.336 "num_base_bdevs_discovered": 3, 00:12:21.336 "num_base_bdevs_operational": 3, 00:12:21.336 "base_bdevs_list": [ 00:12:21.336 { 00:12:21.336 "name": null, 00:12:21.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.336 "is_configured": false, 00:12:21.336 "data_offset": 0, 00:12:21.336 "data_size": 63488 00:12:21.336 }, 00:12:21.336 { 00:12:21.336 "name": "BaseBdev2", 00:12:21.336 "uuid": "15f377b8-a30d-483f-adac-a01f81def033", 00:12:21.336 "is_configured": true, 00:12:21.336 "data_offset": 2048, 00:12:21.336 "data_size": 63488 00:12:21.336 }, 00:12:21.336 { 00:12:21.336 "name": "BaseBdev3", 00:12:21.336 "uuid": "3525e7c5-b4e3-4a1a-a891-aa8ca4ec44be", 00:12:21.336 "is_configured": true, 00:12:21.336 "data_offset": 2048, 00:12:21.336 "data_size": 63488 00:12:21.336 }, 00:12:21.336 { 00:12:21.336 "name": "BaseBdev4", 00:12:21.336 "uuid": "a46aaa92-f218-40d0-a5b5-7fde28bf125b", 00:12:21.336 "is_configured": true, 00:12:21.336 "data_offset": 2048, 00:12:21.336 "data_size": 63488 00:12:21.336 } 00:12:21.336 ] 00:12:21.336 }' 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.336 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.907 
16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.907 16:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.907 [2024-11-05 16:26:34.897865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.167 [2024-11-05 16:26:35.074643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:22.167 16:26:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.167 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.167 [2024-11-05 16:26:35.243565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:22.167 [2024-11-05 16:26:35.243625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.426 BaseBdev2 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.426 [ 00:12:22.426 { 00:12:22.426 "name": "BaseBdev2", 00:12:22.426 "aliases": [ 00:12:22.426 
"6709559b-321c-4fd2-9cc8-6f6981b8ffc2" 00:12:22.426 ], 00:12:22.426 "product_name": "Malloc disk", 00:12:22.426 "block_size": 512, 00:12:22.426 "num_blocks": 65536, 00:12:22.426 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:22.426 "assigned_rate_limits": { 00:12:22.426 "rw_ios_per_sec": 0, 00:12:22.426 "rw_mbytes_per_sec": 0, 00:12:22.426 "r_mbytes_per_sec": 0, 00:12:22.426 "w_mbytes_per_sec": 0 00:12:22.426 }, 00:12:22.426 "claimed": false, 00:12:22.426 "zoned": false, 00:12:22.426 "supported_io_types": { 00:12:22.426 "read": true, 00:12:22.426 "write": true, 00:12:22.426 "unmap": true, 00:12:22.426 "flush": true, 00:12:22.426 "reset": true, 00:12:22.426 "nvme_admin": false, 00:12:22.426 "nvme_io": false, 00:12:22.426 "nvme_io_md": false, 00:12:22.426 "write_zeroes": true, 00:12:22.426 "zcopy": true, 00:12:22.426 "get_zone_info": false, 00:12:22.426 "zone_management": false, 00:12:22.426 "zone_append": false, 00:12:22.426 "compare": false, 00:12:22.426 "compare_and_write": false, 00:12:22.426 "abort": true, 00:12:22.426 "seek_hole": false, 00:12:22.426 "seek_data": false, 00:12:22.426 "copy": true, 00:12:22.426 "nvme_iov_md": false 00:12:22.426 }, 00:12:22.426 "memory_domains": [ 00:12:22.426 { 00:12:22.426 "dma_device_id": "system", 00:12:22.426 "dma_device_type": 1 00:12:22.426 }, 00:12:22.426 { 00:12:22.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.426 "dma_device_type": 2 00:12:22.426 } 00:12:22.426 ], 00:12:22.426 "driver_specific": {} 00:12:22.426 } 00:12:22.426 ] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.426 16:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.426 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.427 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.685 BaseBdev3 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.685 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.685 [ 00:12:22.685 { 
00:12:22.685 "name": "BaseBdev3", 00:12:22.685 "aliases": [ 00:12:22.685 "ca1e033d-c61f-4792-903c-cc7b71abdf83" 00:12:22.686 ], 00:12:22.686 "product_name": "Malloc disk", 00:12:22.686 "block_size": 512, 00:12:22.686 "num_blocks": 65536, 00:12:22.686 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:22.686 "assigned_rate_limits": { 00:12:22.686 "rw_ios_per_sec": 0, 00:12:22.686 "rw_mbytes_per_sec": 0, 00:12:22.686 "r_mbytes_per_sec": 0, 00:12:22.686 "w_mbytes_per_sec": 0 00:12:22.686 }, 00:12:22.686 "claimed": false, 00:12:22.686 "zoned": false, 00:12:22.686 "supported_io_types": { 00:12:22.686 "read": true, 00:12:22.686 "write": true, 00:12:22.686 "unmap": true, 00:12:22.686 "flush": true, 00:12:22.686 "reset": true, 00:12:22.686 "nvme_admin": false, 00:12:22.686 "nvme_io": false, 00:12:22.686 "nvme_io_md": false, 00:12:22.686 "write_zeroes": true, 00:12:22.686 "zcopy": true, 00:12:22.686 "get_zone_info": false, 00:12:22.686 "zone_management": false, 00:12:22.686 "zone_append": false, 00:12:22.686 "compare": false, 00:12:22.686 "compare_and_write": false, 00:12:22.686 "abort": true, 00:12:22.686 "seek_hole": false, 00:12:22.686 "seek_data": false, 00:12:22.686 "copy": true, 00:12:22.686 "nvme_iov_md": false 00:12:22.686 }, 00:12:22.686 "memory_domains": [ 00:12:22.686 { 00:12:22.686 "dma_device_id": "system", 00:12:22.686 "dma_device_type": 1 00:12:22.686 }, 00:12:22.686 { 00:12:22.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.686 "dma_device_type": 2 00:12:22.686 } 00:12:22.686 ], 00:12:22.686 "driver_specific": {} 00:12:22.686 } 00:12:22.686 ] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.686 BaseBdev4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:22.686 [ 00:12:22.686 { 00:12:22.686 "name": "BaseBdev4", 00:12:22.686 "aliases": [ 00:12:22.686 "62ce5e7a-5106-48c8-8077-f4788c1d2855" 00:12:22.686 ], 00:12:22.686 "product_name": "Malloc disk", 00:12:22.686 "block_size": 512, 00:12:22.686 "num_blocks": 65536, 00:12:22.686 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:22.686 "assigned_rate_limits": { 00:12:22.686 "rw_ios_per_sec": 0, 00:12:22.686 "rw_mbytes_per_sec": 0, 00:12:22.686 "r_mbytes_per_sec": 0, 00:12:22.686 "w_mbytes_per_sec": 0 00:12:22.686 }, 00:12:22.686 "claimed": false, 00:12:22.686 "zoned": false, 00:12:22.686 "supported_io_types": { 00:12:22.686 "read": true, 00:12:22.686 "write": true, 00:12:22.686 "unmap": true, 00:12:22.686 "flush": true, 00:12:22.686 "reset": true, 00:12:22.686 "nvme_admin": false, 00:12:22.686 "nvme_io": false, 00:12:22.686 "nvme_io_md": false, 00:12:22.686 "write_zeroes": true, 00:12:22.686 "zcopy": true, 00:12:22.686 "get_zone_info": false, 00:12:22.686 "zone_management": false, 00:12:22.686 "zone_append": false, 00:12:22.686 "compare": false, 00:12:22.686 "compare_and_write": false, 00:12:22.686 "abort": true, 00:12:22.686 "seek_hole": false, 00:12:22.686 "seek_data": false, 00:12:22.686 "copy": true, 00:12:22.686 "nvme_iov_md": false 00:12:22.686 }, 00:12:22.686 "memory_domains": [ 00:12:22.686 { 00:12:22.686 "dma_device_id": "system", 00:12:22.686 "dma_device_type": 1 00:12:22.686 }, 00:12:22.686 { 00:12:22.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.686 "dma_device_type": 2 00:12:22.686 } 00:12:22.686 ], 00:12:22.686 "driver_specific": {} 00:12:22.686 } 00:12:22.686 ] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:22.686 16:26:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.686 [2024-11-05 16:26:35.674105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.686 [2024-11-05 16:26:35.674240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.686 [2024-11-05 16:26:35.674304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.686 [2024-11-05 16:26:35.676705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.686 [2024-11-05 16:26:35.676817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.686 "name": "Existed_Raid", 00:12:22.686 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:22.686 "strip_size_kb": 64, 00:12:22.686 "state": "configuring", 00:12:22.686 "raid_level": "raid0", 00:12:22.686 "superblock": true, 00:12:22.686 "num_base_bdevs": 4, 00:12:22.686 "num_base_bdevs_discovered": 3, 00:12:22.686 "num_base_bdevs_operational": 4, 00:12:22.686 "base_bdevs_list": [ 00:12:22.686 { 00:12:22.686 "name": "BaseBdev1", 00:12:22.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.686 "is_configured": false, 00:12:22.686 "data_offset": 0, 00:12:22.686 "data_size": 0 00:12:22.686 }, 00:12:22.686 { 00:12:22.686 "name": "BaseBdev2", 00:12:22.686 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:22.686 "is_configured": true, 00:12:22.686 "data_offset": 2048, 00:12:22.686 "data_size": 63488 
00:12:22.686 }, 00:12:22.686 { 00:12:22.686 "name": "BaseBdev3", 00:12:22.686 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:22.686 "is_configured": true, 00:12:22.686 "data_offset": 2048, 00:12:22.686 "data_size": 63488 00:12:22.686 }, 00:12:22.686 { 00:12:22.686 "name": "BaseBdev4", 00:12:22.686 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:22.686 "is_configured": true, 00:12:22.686 "data_offset": 2048, 00:12:22.686 "data_size": 63488 00:12:22.686 } 00:12:22.686 ] 00:12:22.686 }' 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.686 16:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.254 [2024-11-05 16:26:36.065504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.254 "name": "Existed_Raid", 00:12:23.254 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:23.254 "strip_size_kb": 64, 00:12:23.254 "state": "configuring", 00:12:23.254 "raid_level": "raid0", 00:12:23.254 "superblock": true, 00:12:23.254 "num_base_bdevs": 4, 00:12:23.254 "num_base_bdevs_discovered": 2, 00:12:23.254 "num_base_bdevs_operational": 4, 00:12:23.254 "base_bdevs_list": [ 00:12:23.254 { 00:12:23.254 "name": "BaseBdev1", 00:12:23.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.254 "is_configured": false, 00:12:23.254 "data_offset": 0, 00:12:23.254 "data_size": 0 00:12:23.254 }, 00:12:23.254 { 00:12:23.254 "name": null, 00:12:23.254 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:23.254 "is_configured": false, 00:12:23.254 "data_offset": 0, 00:12:23.254 "data_size": 63488 
00:12:23.254 }, 00:12:23.254 { 00:12:23.254 "name": "BaseBdev3", 00:12:23.254 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:23.254 "is_configured": true, 00:12:23.254 "data_offset": 2048, 00:12:23.254 "data_size": 63488 00:12:23.254 }, 00:12:23.254 { 00:12:23.254 "name": "BaseBdev4", 00:12:23.254 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:23.254 "is_configured": true, 00:12:23.254 "data_offset": 2048, 00:12:23.254 "data_size": 63488 00:12:23.254 } 00:12:23.254 ] 00:12:23.254 }' 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.254 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.512 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 [2024-11-05 16:26:36.638188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.771 BaseBdev1 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 [ 00:12:23.771 { 00:12:23.771 "name": "BaseBdev1", 00:12:23.771 "aliases": [ 00:12:23.771 "8e879496-789c-40e5-a56a-3485c7a25450" 00:12:23.771 ], 00:12:23.771 "product_name": "Malloc disk", 00:12:23.771 "block_size": 512, 00:12:23.771 "num_blocks": 65536, 00:12:23.771 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:23.771 "assigned_rate_limits": { 00:12:23.771 "rw_ios_per_sec": 0, 00:12:23.771 "rw_mbytes_per_sec": 0, 
00:12:23.771 "r_mbytes_per_sec": 0, 00:12:23.771 "w_mbytes_per_sec": 0 00:12:23.771 }, 00:12:23.771 "claimed": true, 00:12:23.771 "claim_type": "exclusive_write", 00:12:23.771 "zoned": false, 00:12:23.771 "supported_io_types": { 00:12:23.771 "read": true, 00:12:23.771 "write": true, 00:12:23.771 "unmap": true, 00:12:23.771 "flush": true, 00:12:23.771 "reset": true, 00:12:23.771 "nvme_admin": false, 00:12:23.771 "nvme_io": false, 00:12:23.771 "nvme_io_md": false, 00:12:23.771 "write_zeroes": true, 00:12:23.771 "zcopy": true, 00:12:23.771 "get_zone_info": false, 00:12:23.771 "zone_management": false, 00:12:23.771 "zone_append": false, 00:12:23.771 "compare": false, 00:12:23.771 "compare_and_write": false, 00:12:23.771 "abort": true, 00:12:23.771 "seek_hole": false, 00:12:23.771 "seek_data": false, 00:12:23.771 "copy": true, 00:12:23.771 "nvme_iov_md": false 00:12:23.771 }, 00:12:23.771 "memory_domains": [ 00:12:23.771 { 00:12:23.771 "dma_device_id": "system", 00:12:23.771 "dma_device_type": 1 00:12:23.771 }, 00:12:23.771 { 00:12:23.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.771 "dma_device_type": 2 00:12:23.771 } 00:12:23.771 ], 00:12:23.771 "driver_specific": {} 00:12:23.771 } 00:12:23.771 ] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.771 16:26:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.771 "name": "Existed_Raid", 00:12:23.771 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:23.771 "strip_size_kb": 64, 00:12:23.771 "state": "configuring", 00:12:23.771 "raid_level": "raid0", 00:12:23.771 "superblock": true, 00:12:23.771 "num_base_bdevs": 4, 00:12:23.771 "num_base_bdevs_discovered": 3, 00:12:23.771 "num_base_bdevs_operational": 4, 00:12:23.771 "base_bdevs_list": [ 00:12:23.771 { 00:12:23.771 "name": "BaseBdev1", 00:12:23.771 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:23.771 "is_configured": true, 00:12:23.771 "data_offset": 2048, 00:12:23.771 "data_size": 63488 00:12:23.771 }, 00:12:23.771 { 
00:12:23.771 "name": null, 00:12:23.771 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:23.771 "is_configured": false, 00:12:23.771 "data_offset": 0, 00:12:23.771 "data_size": 63488 00:12:23.771 }, 00:12:23.771 { 00:12:23.771 "name": "BaseBdev3", 00:12:23.771 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:23.771 "is_configured": true, 00:12:23.771 "data_offset": 2048, 00:12:23.771 "data_size": 63488 00:12:23.771 }, 00:12:23.771 { 00:12:23.771 "name": "BaseBdev4", 00:12:23.771 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:23.771 "is_configured": true, 00:12:23.771 "data_offset": 2048, 00:12:23.771 "data_size": 63488 00:12:23.771 } 00:12:23.771 ] 00:12:23.771 }' 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.771 16:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.339 [2024-11-05 16:26:37.229326] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.339 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.340 16:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.340 "name": "Existed_Raid", 00:12:24.340 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:24.340 "strip_size_kb": 64, 00:12:24.340 "state": "configuring", 00:12:24.340 "raid_level": "raid0", 00:12:24.340 "superblock": true, 00:12:24.340 "num_base_bdevs": 4, 00:12:24.340 "num_base_bdevs_discovered": 2, 00:12:24.340 "num_base_bdevs_operational": 4, 00:12:24.340 "base_bdevs_list": [ 00:12:24.340 { 00:12:24.340 "name": "BaseBdev1", 00:12:24.340 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:24.340 "is_configured": true, 00:12:24.340 "data_offset": 2048, 00:12:24.340 "data_size": 63488 00:12:24.340 }, 00:12:24.340 { 00:12:24.340 "name": null, 00:12:24.340 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:24.340 "is_configured": false, 00:12:24.340 "data_offset": 0, 00:12:24.340 "data_size": 63488 00:12:24.340 }, 00:12:24.340 { 00:12:24.340 "name": null, 00:12:24.340 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:24.340 "is_configured": false, 00:12:24.340 "data_offset": 0, 00:12:24.340 "data_size": 63488 00:12:24.340 }, 00:12:24.340 { 00:12:24.340 "name": "BaseBdev4", 00:12:24.340 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:24.340 "is_configured": true, 00:12:24.340 "data_offset": 2048, 00:12:24.340 "data_size": 63488 00:12:24.340 } 00:12:24.340 ] 00:12:24.340 }' 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.340 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.907 16:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.907 [2024-11-05 16:26:37.804705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.907 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.907 "name": "Existed_Raid", 00:12:24.907 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:24.907 "strip_size_kb": 64, 00:12:24.907 "state": "configuring", 00:12:24.907 "raid_level": "raid0", 00:12:24.907 "superblock": true, 00:12:24.907 "num_base_bdevs": 4, 00:12:24.907 "num_base_bdevs_discovered": 3, 00:12:24.907 "num_base_bdevs_operational": 4, 00:12:24.907 "base_bdevs_list": [ 00:12:24.907 { 00:12:24.907 "name": "BaseBdev1", 00:12:24.907 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:24.907 "is_configured": true, 00:12:24.907 "data_offset": 2048, 00:12:24.907 "data_size": 63488 00:12:24.907 }, 00:12:24.907 { 00:12:24.907 "name": null, 00:12:24.907 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:24.908 "is_configured": false, 00:12:24.908 "data_offset": 0, 00:12:24.908 "data_size": 63488 00:12:24.908 }, 00:12:24.908 { 00:12:24.908 "name": "BaseBdev3", 00:12:24.908 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:24.908 "is_configured": true, 00:12:24.908 "data_offset": 2048, 00:12:24.908 "data_size": 63488 00:12:24.908 }, 00:12:24.908 { 00:12:24.908 "name": "BaseBdev4", 00:12:24.908 "uuid": 
"62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:24.908 "is_configured": true, 00:12:24.908 "data_offset": 2048, 00:12:24.908 "data_size": 63488 00:12:24.908 } 00:12:24.908 ] 00:12:24.908 }' 00:12:24.908 16:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.908 16:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.474 [2024-11-05 16:26:38.300098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.474 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.474 "name": "Existed_Raid", 00:12:25.474 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:25.474 "strip_size_kb": 64, 00:12:25.474 "state": "configuring", 00:12:25.474 "raid_level": "raid0", 00:12:25.474 "superblock": true, 00:12:25.474 "num_base_bdevs": 4, 00:12:25.474 "num_base_bdevs_discovered": 2, 00:12:25.474 "num_base_bdevs_operational": 4, 00:12:25.474 "base_bdevs_list": [ 00:12:25.474 { 00:12:25.474 "name": null, 00:12:25.474 
"uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:25.474 "is_configured": false, 00:12:25.474 "data_offset": 0, 00:12:25.474 "data_size": 63488 00:12:25.474 }, 00:12:25.474 { 00:12:25.474 "name": null, 00:12:25.475 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:25.475 "is_configured": false, 00:12:25.475 "data_offset": 0, 00:12:25.475 "data_size": 63488 00:12:25.475 }, 00:12:25.475 { 00:12:25.475 "name": "BaseBdev3", 00:12:25.475 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:25.475 "is_configured": true, 00:12:25.475 "data_offset": 2048, 00:12:25.475 "data_size": 63488 00:12:25.475 }, 00:12:25.475 { 00:12:25.475 "name": "BaseBdev4", 00:12:25.475 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:25.475 "is_configured": true, 00:12:25.475 "data_offset": 2048, 00:12:25.475 "data_size": 63488 00:12:25.475 } 00:12:25.475 ] 00:12:25.475 }' 00:12:25.475 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.475 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.040 [2024-11-05 16:26:38.938334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.040 16:26:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.040 "name": "Existed_Raid", 00:12:26.040 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:26.040 "strip_size_kb": 64, 00:12:26.040 "state": "configuring", 00:12:26.040 "raid_level": "raid0", 00:12:26.040 "superblock": true, 00:12:26.040 "num_base_bdevs": 4, 00:12:26.040 "num_base_bdevs_discovered": 3, 00:12:26.040 "num_base_bdevs_operational": 4, 00:12:26.040 "base_bdevs_list": [ 00:12:26.040 { 00:12:26.040 "name": null, 00:12:26.040 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:26.040 "is_configured": false, 00:12:26.040 "data_offset": 0, 00:12:26.040 "data_size": 63488 00:12:26.040 }, 00:12:26.040 { 00:12:26.040 "name": "BaseBdev2", 00:12:26.040 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:26.040 "is_configured": true, 00:12:26.040 "data_offset": 2048, 00:12:26.040 "data_size": 63488 00:12:26.040 }, 00:12:26.040 { 00:12:26.040 "name": "BaseBdev3", 00:12:26.040 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:26.040 "is_configured": true, 00:12:26.040 "data_offset": 2048, 00:12:26.040 "data_size": 63488 00:12:26.040 }, 00:12:26.040 { 00:12:26.040 "name": "BaseBdev4", 00:12:26.040 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:26.040 "is_configured": true, 00:12:26.040 "data_offset": 2048, 00:12:26.040 "data_size": 63488 00:12:26.040 } 00:12:26.040 ] 00:12:26.040 }' 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.040 16:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.607 16:26:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8e879496-789c-40e5-a56a-3485c7a25450 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.607 [2024-11-05 16:26:39.537904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:26.607 [2024-11-05 16:26:39.538268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:26.607 [2024-11-05 16:26:39.538325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:26.607 [2024-11-05 16:26:39.538649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:26.607 NewBaseBdev 00:12:26.607 [2024-11-05 16:26:39.538877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:26.607 [2024-11-05 16:26:39.538895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:26.607 [2024-11-05 16:26:39.539044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:26.607 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.608 16:26:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.608 [ 00:12:26.608 { 00:12:26.608 "name": "NewBaseBdev", 00:12:26.608 "aliases": [ 00:12:26.608 "8e879496-789c-40e5-a56a-3485c7a25450" 00:12:26.608 ], 00:12:26.608 "product_name": "Malloc disk", 00:12:26.608 "block_size": 512, 00:12:26.608 "num_blocks": 65536, 00:12:26.608 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:26.608 "assigned_rate_limits": { 00:12:26.608 "rw_ios_per_sec": 0, 00:12:26.608 "rw_mbytes_per_sec": 0, 00:12:26.608 "r_mbytes_per_sec": 0, 00:12:26.608 "w_mbytes_per_sec": 0 00:12:26.608 }, 00:12:26.608 "claimed": true, 00:12:26.608 "claim_type": "exclusive_write", 00:12:26.608 "zoned": false, 00:12:26.608 "supported_io_types": { 00:12:26.608 "read": true, 00:12:26.608 "write": true, 00:12:26.608 "unmap": true, 00:12:26.608 "flush": true, 00:12:26.608 "reset": true, 00:12:26.608 "nvme_admin": false, 00:12:26.608 "nvme_io": false, 00:12:26.608 "nvme_io_md": false, 00:12:26.608 "write_zeroes": true, 00:12:26.608 "zcopy": true, 00:12:26.608 "get_zone_info": false, 00:12:26.608 "zone_management": false, 00:12:26.608 "zone_append": false, 00:12:26.608 "compare": false, 00:12:26.608 "compare_and_write": false, 00:12:26.608 "abort": true, 00:12:26.608 "seek_hole": false, 00:12:26.608 "seek_data": false, 00:12:26.608 "copy": true, 00:12:26.608 "nvme_iov_md": false 00:12:26.608 }, 00:12:26.608 "memory_domains": [ 00:12:26.608 { 00:12:26.608 "dma_device_id": "system", 00:12:26.608 "dma_device_type": 1 00:12:26.608 }, 00:12:26.608 { 00:12:26.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.608 "dma_device_type": 2 00:12:26.608 } 00:12:26.608 ], 00:12:26.608 "driver_specific": {} 00:12:26.608 } 00:12:26.608 ] 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:26.608 16:26:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.608 "name": "Existed_Raid", 00:12:26.608 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:26.608 "strip_size_kb": 64, 00:12:26.608 
"state": "online", 00:12:26.608 "raid_level": "raid0", 00:12:26.608 "superblock": true, 00:12:26.608 "num_base_bdevs": 4, 00:12:26.608 "num_base_bdevs_discovered": 4, 00:12:26.608 "num_base_bdevs_operational": 4, 00:12:26.608 "base_bdevs_list": [ 00:12:26.608 { 00:12:26.608 "name": "NewBaseBdev", 00:12:26.608 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:26.608 "is_configured": true, 00:12:26.608 "data_offset": 2048, 00:12:26.608 "data_size": 63488 00:12:26.608 }, 00:12:26.608 { 00:12:26.608 "name": "BaseBdev2", 00:12:26.608 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:26.608 "is_configured": true, 00:12:26.608 "data_offset": 2048, 00:12:26.608 "data_size": 63488 00:12:26.608 }, 00:12:26.608 { 00:12:26.608 "name": "BaseBdev3", 00:12:26.608 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:26.608 "is_configured": true, 00:12:26.608 "data_offset": 2048, 00:12:26.608 "data_size": 63488 00:12:26.608 }, 00:12:26.608 { 00:12:26.608 "name": "BaseBdev4", 00:12:26.608 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:26.608 "is_configured": true, 00:12:26.608 "data_offset": 2048, 00:12:26.608 "data_size": 63488 00:12:26.608 } 00:12:26.608 ] 00:12:26.608 }' 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.608 16:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.173 
16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.173 [2024-11-05 16:26:40.081478] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.173 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.173 "name": "Existed_Raid", 00:12:27.173 "aliases": [ 00:12:27.173 "6748c9f1-89bd-4def-bc39-028b4311e1aa" 00:12:27.173 ], 00:12:27.173 "product_name": "Raid Volume", 00:12:27.173 "block_size": 512, 00:12:27.173 "num_blocks": 253952, 00:12:27.173 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:27.173 "assigned_rate_limits": { 00:12:27.173 "rw_ios_per_sec": 0, 00:12:27.173 "rw_mbytes_per_sec": 0, 00:12:27.173 "r_mbytes_per_sec": 0, 00:12:27.173 "w_mbytes_per_sec": 0 00:12:27.173 }, 00:12:27.173 "claimed": false, 00:12:27.173 "zoned": false, 00:12:27.173 "supported_io_types": { 00:12:27.173 "read": true, 00:12:27.173 "write": true, 00:12:27.173 "unmap": true, 00:12:27.173 "flush": true, 00:12:27.173 "reset": true, 00:12:27.173 "nvme_admin": false, 00:12:27.173 "nvme_io": false, 00:12:27.173 "nvme_io_md": false, 00:12:27.173 "write_zeroes": true, 00:12:27.173 "zcopy": false, 00:12:27.173 "get_zone_info": false, 00:12:27.173 "zone_management": false, 00:12:27.173 "zone_append": false, 00:12:27.173 "compare": false, 00:12:27.173 "compare_and_write": false, 00:12:27.173 "abort": 
false, 00:12:27.173 "seek_hole": false, 00:12:27.173 "seek_data": false, 00:12:27.173 "copy": false, 00:12:27.173 "nvme_iov_md": false 00:12:27.173 }, 00:12:27.173 "memory_domains": [ 00:12:27.173 { 00:12:27.173 "dma_device_id": "system", 00:12:27.173 "dma_device_type": 1 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.173 "dma_device_type": 2 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "system", 00:12:27.173 "dma_device_type": 1 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.173 "dma_device_type": 2 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "system", 00:12:27.173 "dma_device_type": 1 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.173 "dma_device_type": 2 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "system", 00:12:27.173 "dma_device_type": 1 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.173 "dma_device_type": 2 00:12:27.173 } 00:12:27.173 ], 00:12:27.173 "driver_specific": { 00:12:27.173 "raid": { 00:12:27.173 "uuid": "6748c9f1-89bd-4def-bc39-028b4311e1aa", 00:12:27.173 "strip_size_kb": 64, 00:12:27.173 "state": "online", 00:12:27.173 "raid_level": "raid0", 00:12:27.173 "superblock": true, 00:12:27.173 "num_base_bdevs": 4, 00:12:27.173 "num_base_bdevs_discovered": 4, 00:12:27.173 "num_base_bdevs_operational": 4, 00:12:27.173 "base_bdevs_list": [ 00:12:27.173 { 00:12:27.173 "name": "NewBaseBdev", 00:12:27.173 "uuid": "8e879496-789c-40e5-a56a-3485c7a25450", 00:12:27.173 "is_configured": true, 00:12:27.173 "data_offset": 2048, 00:12:27.173 "data_size": 63488 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "name": "BaseBdev2", 00:12:27.173 "uuid": "6709559b-321c-4fd2-9cc8-6f6981b8ffc2", 00:12:27.173 "is_configured": true, 00:12:27.173 "data_offset": 2048, 00:12:27.173 "data_size": 63488 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 
"name": "BaseBdev3", 00:12:27.173 "uuid": "ca1e033d-c61f-4792-903c-cc7b71abdf83", 00:12:27.173 "is_configured": true, 00:12:27.173 "data_offset": 2048, 00:12:27.173 "data_size": 63488 00:12:27.173 }, 00:12:27.173 { 00:12:27.173 "name": "BaseBdev4", 00:12:27.173 "uuid": "62ce5e7a-5106-48c8-8077-f4788c1d2855", 00:12:27.173 "is_configured": true, 00:12:27.173 "data_offset": 2048, 00:12:27.173 "data_size": 63488 00:12:27.174 } 00:12:27.174 ] 00:12:27.174 } 00:12:27.174 } 00:12:27.174 }' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:27.174 BaseBdev2 00:12:27.174 BaseBdev3 00:12:27.174 BaseBdev4' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.174 16:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.174 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.431 [2024-11-05 16:26:40.416662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.431 [2024-11-05 16:26:40.416741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.431 [2024-11-05 16:26:40.416850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.431 [2024-11-05 16:26:40.416925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.431 [2024-11-05 16:26:40.416937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70340 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70340 ']' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70340 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70340 00:12:27.431 killing process with pid 70340 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70340' 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70340 00:12:27.431 16:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70340 00:12:27.431 [2024-11-05 16:26:40.452112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.996 [2024-11-05 16:26:40.874131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.374 16:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:29.374 00:12:29.374 real 0m12.299s 00:12:29.374 user 0m19.557s 00:12:29.374 sys 0m2.211s 00:12:29.374 ************************************ 00:12:29.374 END TEST raid_state_function_test_sb 00:12:29.374 
************************************ 00:12:29.374 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.374 16:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.374 16:26:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:29.374 16:26:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:29.374 16:26:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.374 16:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.374 ************************************ 00:12:29.374 START TEST raid_superblock_test 00:12:29.374 ************************************ 00:12:29.374 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:12:29.374 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:29.374 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71018 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71018 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71018 ']' 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:29.375 16:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.375 [2024-11-05 16:26:42.205258] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:12:29.375 [2024-11-05 16:26:42.206007] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71018 ] 00:12:29.375 [2024-11-05 16:26:42.381212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.633 [2024-11-05 16:26:42.507708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.634 [2024-11-05 16:26:42.719382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.634 [2024-11-05 16:26:42.719508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:30.202 
16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 malloc1 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 [2024-11-05 16:26:43.134059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.202 [2024-11-05 16:26:43.134188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.202 [2024-11-05 16:26:43.134235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:30.202 [2024-11-05 16:26:43.134269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.202 [2024-11-05 16:26:43.136584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.202 [2024-11-05 16:26:43.136684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.202 pt1 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 malloc2 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 [2024-11-05 16:26:43.193564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.202 [2024-11-05 16:26:43.193622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.202 [2024-11-05 16:26:43.193643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:30.202 [2024-11-05 16:26:43.193653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.202 [2024-11-05 16:26:43.195837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.202 [2024-11-05 16:26:43.195884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.202 
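Each loop iteration in the trace appends to three parallel arrays (malloc name, passthru name, zero-padded UUID) before issuing the RPCs. A standalone sketch of just that bookkeeping, with the actual `rpc_cmd` calls left as comments since they require a running SPDK target:

```shell
# Sketch of the per-base-bdev bookkeeping from bdev_raid.sh@416-426.
# Only the arrays are built here; RPC calls are stubbed out.
num_base_bdevs=4
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs_malloc+=("malloc$i")
    base_bdevs_pt+=("pt$i")
    base_bdevs_pt_uuid+=("$(printf '00000000-0000-0000-0000-%012d' "$i")")
    # The real test then runs, per the log:
    #   rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
    #   rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" -u "$uuid"
done
echo "${base_bdevs_pt[*]}"
```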
pt2 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 malloc3 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.202 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.202 [2024-11-05 16:26:43.261894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.202 [2024-11-05 16:26:43.262005] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.203 [2024-11-05 16:26:43.262051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:30.203 [2024-11-05 16:26:43.262106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.203 [2024-11-05 16:26:43.264373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.203 [2024-11-05 16:26:43.264447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.203 pt3 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.203 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.462 malloc4 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.462 [2024-11-05 16:26:43.319516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:30.462 [2024-11-05 16:26:43.319640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.462 [2024-11-05 16:26:43.319679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:30.462 [2024-11-05 16:26:43.319708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.462 [2024-11-05 16:26:43.322179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.462 [2024-11-05 16:26:43.322258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:30.462 pt4 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.462 [2024-11-05 16:26:43.331578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.462 [2024-11-05 
16:26:43.333876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.462 [2024-11-05 16:26:43.334024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.462 [2024-11-05 16:26:43.334122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:30.462 [2024-11-05 16:26:43.334406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.462 [2024-11-05 16:26:43.334462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:30.462 [2024-11-05 16:26:43.334845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:30.462 [2024-11-05 16:26:43.335105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.462 [2024-11-05 16:26:43.335160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:30.462 [2024-11-05 16:26:43.335461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.462 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.462 "name": "raid_bdev1", 00:12:30.462 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:30.462 "strip_size_kb": 64, 00:12:30.462 "state": "online", 00:12:30.462 "raid_level": "raid0", 00:12:30.462 "superblock": true, 00:12:30.462 "num_base_bdevs": 4, 00:12:30.462 "num_base_bdevs_discovered": 4, 00:12:30.462 "num_base_bdevs_operational": 4, 00:12:30.462 "base_bdevs_list": [ 00:12:30.462 { 00:12:30.462 "name": "pt1", 00:12:30.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.462 "is_configured": true, 00:12:30.463 "data_offset": 2048, 00:12:30.463 "data_size": 63488 00:12:30.463 }, 00:12:30.463 { 00:12:30.463 "name": "pt2", 00:12:30.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.463 "is_configured": true, 00:12:30.463 "data_offset": 2048, 00:12:30.463 "data_size": 63488 00:12:30.463 }, 00:12:30.463 { 00:12:30.463 "name": "pt3", 00:12:30.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.463 "is_configured": true, 00:12:30.463 "data_offset": 2048, 00:12:30.463 
"data_size": 63488 00:12:30.463 }, 00:12:30.463 { 00:12:30.463 "name": "pt4", 00:12:30.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.463 "is_configured": true, 00:12:30.463 "data_offset": 2048, 00:12:30.463 "data_size": 63488 00:12:30.463 } 00:12:30.463 ] 00:12:30.463 }' 00:12:30.463 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.463 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.722 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.981 [2024-11-05 16:26:43.819033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.981 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.981 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.981 "name": "raid_bdev1", 00:12:30.981 "aliases": [ 00:12:30.981 "d6a181e0-c0bd-4ae0-91f1-da64935243ed" 
00:12:30.981 ], 00:12:30.981 "product_name": "Raid Volume", 00:12:30.981 "block_size": 512, 00:12:30.981 "num_blocks": 253952, 00:12:30.981 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:30.981 "assigned_rate_limits": { 00:12:30.981 "rw_ios_per_sec": 0, 00:12:30.981 "rw_mbytes_per_sec": 0, 00:12:30.981 "r_mbytes_per_sec": 0, 00:12:30.981 "w_mbytes_per_sec": 0 00:12:30.981 }, 00:12:30.981 "claimed": false, 00:12:30.981 "zoned": false, 00:12:30.981 "supported_io_types": { 00:12:30.981 "read": true, 00:12:30.981 "write": true, 00:12:30.981 "unmap": true, 00:12:30.981 "flush": true, 00:12:30.981 "reset": true, 00:12:30.981 "nvme_admin": false, 00:12:30.981 "nvme_io": false, 00:12:30.981 "nvme_io_md": false, 00:12:30.981 "write_zeroes": true, 00:12:30.981 "zcopy": false, 00:12:30.981 "get_zone_info": false, 00:12:30.981 "zone_management": false, 00:12:30.981 "zone_append": false, 00:12:30.981 "compare": false, 00:12:30.981 "compare_and_write": false, 00:12:30.981 "abort": false, 00:12:30.981 "seek_hole": false, 00:12:30.981 "seek_data": false, 00:12:30.981 "copy": false, 00:12:30.981 "nvme_iov_md": false 00:12:30.981 }, 00:12:30.981 "memory_domains": [ 00:12:30.981 { 00:12:30.981 "dma_device_id": "system", 00:12:30.981 "dma_device_type": 1 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.981 "dma_device_type": 2 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "system", 00:12:30.981 "dma_device_type": 1 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.981 "dma_device_type": 2 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "system", 00:12:30.981 "dma_device_type": 1 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.981 "dma_device_type": 2 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": "system", 00:12:30.981 "dma_device_type": 1 00:12:30.981 }, 00:12:30.981 { 00:12:30.981 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:30.981 "dma_device_type": 2 00:12:30.981 } 00:12:30.981 ], 00:12:30.981 "driver_specific": { 00:12:30.981 "raid": { 00:12:30.981 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:30.981 "strip_size_kb": 64, 00:12:30.981 "state": "online", 00:12:30.981 "raid_level": "raid0", 00:12:30.981 "superblock": true, 00:12:30.981 "num_base_bdevs": 4, 00:12:30.981 "num_base_bdevs_discovered": 4, 00:12:30.981 "num_base_bdevs_operational": 4, 00:12:30.981 "base_bdevs_list": [ 00:12:30.981 { 00:12:30.981 "name": "pt1", 00:12:30.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.981 "is_configured": true, 00:12:30.981 "data_offset": 2048, 00:12:30.981 "data_size": 63488 00:12:30.981 }, 00:12:30.981 { 00:12:30.982 "name": "pt2", 00:12:30.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.982 "is_configured": true, 00:12:30.982 "data_offset": 2048, 00:12:30.982 "data_size": 63488 00:12:30.982 }, 00:12:30.982 { 00:12:30.982 "name": "pt3", 00:12:30.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.982 "is_configured": true, 00:12:30.982 "data_offset": 2048, 00:12:30.982 "data_size": 63488 00:12:30.982 }, 00:12:30.982 { 00:12:30.982 "name": "pt4", 00:12:30.982 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.982 "is_configured": true, 00:12:30.982 "data_offset": 2048, 00:12:30.982 "data_size": 63488 00:12:30.982 } 00:12:30.982 ] 00:12:30.982 } 00:12:30.982 } 00:12:30.982 }' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:30.982 pt2 00:12:30.982 pt3 00:12:30.982 pt4' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.982 16:26:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.982 16:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.982 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 [2024-11-05 16:26:44.094582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d6a181e0-c0bd-4ae0-91f1-da64935243ed 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d6a181e0-c0bd-4ae0-91f1-da64935243ed ']' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 [2024-11-05 16:26:44.138112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.242 [2024-11-05 16:26:44.138139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.242 [2024-11-05 16:26:44.138228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.242 [2024-11-05 16:26:44.138299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.242 [2024-11-05 16:26:44.138314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.242 16:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.242 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.242 [2024-11-05 16:26:44.301892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:31.242 [2024-11-05 16:26:44.304053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:31.242 [2024-11-05 16:26:44.304106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:31.242 [2024-11-05 16:26:44.304144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:31.243 [2024-11-05 16:26:44.304199] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:31.243 [2024-11-05 16:26:44.304256] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:31.243 [2024-11-05 16:26:44.304278] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:31.243 [2024-11-05 16:26:44.304299] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:31.243 [2024-11-05 16:26:44.304313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.243 [2024-11-05 16:26:44.304328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:31.243 request: 00:12:31.243 { 00:12:31.243 "name": "raid_bdev1", 00:12:31.243 "raid_level": "raid0", 00:12:31.243 "base_bdevs": [ 00:12:31.243 "malloc1", 00:12:31.243 "malloc2", 00:12:31.243 "malloc3", 00:12:31.243 "malloc4" 00:12:31.243 ], 00:12:31.243 "strip_size_kb": 64, 00:12:31.243 "superblock": false, 00:12:31.243 "method": "bdev_raid_create", 00:12:31.243 "req_id": 1 00:12:31.243 } 00:12:31.243 Got JSON-RPC error response 00:12:31.243 response: 00:12:31.243 { 00:12:31.243 "code": -17, 00:12:31.243 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:31.243 } 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.243 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.502 [2024-11-05 16:26:44.369767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.502 [2024-11-05 16:26:44.369890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.502 [2024-11-05 16:26:44.369929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.502 [2024-11-05 16:26:44.369965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.502 [2024-11-05 16:26:44.372430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.502 [2024-11-05 16:26:44.372547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.502 [2024-11-05 16:26:44.372678] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:31.502 [2024-11-05 16:26:44.372785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.502 pt1 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.502 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.502 "name": "raid_bdev1", 00:12:31.502 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:31.502 "strip_size_kb": 64, 00:12:31.502 "state": "configuring", 00:12:31.502 "raid_level": "raid0", 00:12:31.502 "superblock": true, 00:12:31.502 "num_base_bdevs": 4, 00:12:31.502 "num_base_bdevs_discovered": 1, 00:12:31.502 "num_base_bdevs_operational": 4, 00:12:31.502 "base_bdevs_list": [ 00:12:31.502 { 00:12:31.502 "name": "pt1", 00:12:31.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.502 "is_configured": true, 00:12:31.502 "data_offset": 2048, 00:12:31.502 "data_size": 63488 00:12:31.502 }, 00:12:31.502 { 00:12:31.502 "name": null, 00:12:31.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.502 "is_configured": false, 00:12:31.502 "data_offset": 2048, 00:12:31.502 "data_size": 63488 00:12:31.502 }, 00:12:31.502 { 00:12:31.502 "name": null, 00:12:31.502 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.502 "is_configured": false, 00:12:31.502 "data_offset": 2048, 00:12:31.502 "data_size": 63488 00:12:31.502 }, 00:12:31.502 { 00:12:31.502 "name": null, 00:12:31.502 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.502 "is_configured": false, 00:12:31.502 "data_offset": 2048, 00:12:31.503 "data_size": 63488 00:12:31.503 } 00:12:31.503 ] 00:12:31.503 }' 00:12:31.503 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.503 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 [2024-11-05 16:26:44.829001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.761 [2024-11-05 16:26:44.829159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.761 [2024-11-05 16:26:44.829187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:31.761 [2024-11-05 16:26:44.829200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.761 [2024-11-05 16:26:44.829714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.761 [2024-11-05 16:26:44.829747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.761 [2024-11-05 16:26:44.829841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:31.761 [2024-11-05 16:26:44.829870] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.761 pt2 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.761 [2024-11-05 16:26:44.840989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.761 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.762 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.021 16:26:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.021 "name": "raid_bdev1", 00:12:32.021 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:32.021 "strip_size_kb": 64, 00:12:32.021 "state": "configuring", 00:12:32.021 "raid_level": "raid0", 00:12:32.021 "superblock": true, 00:12:32.021 "num_base_bdevs": 4, 00:12:32.021 "num_base_bdevs_discovered": 1, 00:12:32.021 "num_base_bdevs_operational": 4, 00:12:32.021 "base_bdevs_list": [ 00:12:32.021 { 00:12:32.021 "name": "pt1", 00:12:32.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.021 "is_configured": true, 00:12:32.021 "data_offset": 2048, 00:12:32.021 "data_size": 63488 00:12:32.021 }, 00:12:32.021 { 00:12:32.021 "name": null, 00:12:32.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.021 "is_configured": false, 00:12:32.021 "data_offset": 0, 00:12:32.021 "data_size": 63488 00:12:32.021 }, 00:12:32.021 { 00:12:32.021 "name": null, 00:12:32.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.021 "is_configured": false, 00:12:32.021 "data_offset": 2048, 00:12:32.021 "data_size": 63488 00:12:32.021 }, 00:12:32.021 { 00:12:32.021 "name": null, 00:12:32.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.021 "is_configured": false, 00:12:32.021 "data_offset": 2048, 00:12:32.021 "data_size": 63488 00:12:32.021 } 00:12:32.021 ] 00:12:32.021 }' 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.021 16:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.281 [2024-11-05 16:26:45.300327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:32.281 [2024-11-05 16:26:45.300455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.281 [2024-11-05 16:26:45.300548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:32.281 [2024-11-05 16:26:45.300585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.281 [2024-11-05 16:26:45.301129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.281 [2024-11-05 16:26:45.301196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:32.281 [2024-11-05 16:26:45.301332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:32.281 [2024-11-05 16:26:45.301391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:32.281 pt2 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.281 [2024-11-05 16:26:45.312275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:32.281 [2024-11-05 16:26:45.312363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.281 [2024-11-05 16:26:45.312410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:32.281 [2024-11-05 16:26:45.312422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.281 [2024-11-05 16:26:45.312936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.281 [2024-11-05 16:26:45.312956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:32.281 [2024-11-05 16:26:45.313045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:32.281 [2024-11-05 16:26:45.313066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:32.281 pt3 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.281 [2024-11-05 16:26:45.324226] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:32.281 [2024-11-05 16:26:45.324277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.281 [2024-11-05 16:26:45.324313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:32.281 [2024-11-05 16:26:45.324321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.281 [2024-11-05 16:26:45.324784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.281 [2024-11-05 16:26:45.324802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:32.281 [2024-11-05 16:26:45.324882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:32.281 [2024-11-05 16:26:45.324904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:32.281 [2024-11-05 16:26:45.325071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:32.281 [2024-11-05 16:26:45.325087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:32.281 [2024-11-05 16:26:45.325337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:32.281 [2024-11-05 16:26:45.325521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:32.281 [2024-11-05 16:26:45.325557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:32.281 [2024-11-05 16:26:45.325728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.281 pt4 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:32.281 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:32.282 
16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.282 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.540 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.540 "name": "raid_bdev1", 00:12:32.540 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:32.540 "strip_size_kb": 64, 00:12:32.540 "state": "online", 00:12:32.540 "raid_level": "raid0", 00:12:32.540 "superblock": true, 00:12:32.540 
"num_base_bdevs": 4, 00:12:32.540 "num_base_bdevs_discovered": 4, 00:12:32.540 "num_base_bdevs_operational": 4, 00:12:32.540 "base_bdevs_list": [ 00:12:32.540 { 00:12:32.540 "name": "pt1", 00:12:32.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.540 "is_configured": true, 00:12:32.540 "data_offset": 2048, 00:12:32.540 "data_size": 63488 00:12:32.540 }, 00:12:32.540 { 00:12:32.540 "name": "pt2", 00:12:32.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.541 "is_configured": true, 00:12:32.541 "data_offset": 2048, 00:12:32.541 "data_size": 63488 00:12:32.541 }, 00:12:32.541 { 00:12:32.541 "name": "pt3", 00:12:32.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.541 "is_configured": true, 00:12:32.541 "data_offset": 2048, 00:12:32.541 "data_size": 63488 00:12:32.541 }, 00:12:32.541 { 00:12:32.541 "name": "pt4", 00:12:32.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.541 "is_configured": true, 00:12:32.541 "data_offset": 2048, 00:12:32.541 "data_size": 63488 00:12:32.541 } 00:12:32.541 ] 00:12:32.541 }' 00:12:32.541 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.541 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.799 [2024-11-05 16:26:45.835827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.799 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.799 "name": "raid_bdev1", 00:12:32.799 "aliases": [ 00:12:32.799 "d6a181e0-c0bd-4ae0-91f1-da64935243ed" 00:12:32.799 ], 00:12:32.799 "product_name": "Raid Volume", 00:12:32.799 "block_size": 512, 00:12:32.799 "num_blocks": 253952, 00:12:32.799 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:32.799 "assigned_rate_limits": { 00:12:32.799 "rw_ios_per_sec": 0, 00:12:32.799 "rw_mbytes_per_sec": 0, 00:12:32.799 "r_mbytes_per_sec": 0, 00:12:32.799 "w_mbytes_per_sec": 0 00:12:32.799 }, 00:12:32.799 "claimed": false, 00:12:32.799 "zoned": false, 00:12:32.799 "supported_io_types": { 00:12:32.799 "read": true, 00:12:32.799 "write": true, 00:12:32.799 "unmap": true, 00:12:32.799 "flush": true, 00:12:32.800 "reset": true, 00:12:32.800 "nvme_admin": false, 00:12:32.800 "nvme_io": false, 00:12:32.800 "nvme_io_md": false, 00:12:32.800 "write_zeroes": true, 00:12:32.800 "zcopy": false, 00:12:32.800 "get_zone_info": false, 00:12:32.800 "zone_management": false, 00:12:32.800 "zone_append": false, 00:12:32.800 "compare": false, 00:12:32.800 "compare_and_write": false, 00:12:32.800 "abort": false, 00:12:32.800 "seek_hole": false, 00:12:32.800 "seek_data": false, 00:12:32.800 "copy": false, 00:12:32.800 "nvme_iov_md": false 00:12:32.800 }, 00:12:32.800 "memory_domains": [ 00:12:32.800 { 00:12:32.800 "dma_device_id": "system", 
00:12:32.800 "dma_device_type": 1 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.800 "dma_device_type": 2 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "system", 00:12:32.800 "dma_device_type": 1 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.800 "dma_device_type": 2 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "system", 00:12:32.800 "dma_device_type": 1 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.800 "dma_device_type": 2 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "system", 00:12:32.800 "dma_device_type": 1 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.800 "dma_device_type": 2 00:12:32.800 } 00:12:32.800 ], 00:12:32.800 "driver_specific": { 00:12:32.800 "raid": { 00:12:32.800 "uuid": "d6a181e0-c0bd-4ae0-91f1-da64935243ed", 00:12:32.800 "strip_size_kb": 64, 00:12:32.800 "state": "online", 00:12:32.800 "raid_level": "raid0", 00:12:32.800 "superblock": true, 00:12:32.800 "num_base_bdevs": 4, 00:12:32.800 "num_base_bdevs_discovered": 4, 00:12:32.800 "num_base_bdevs_operational": 4, 00:12:32.800 "base_bdevs_list": [ 00:12:32.800 { 00:12:32.800 "name": "pt1", 00:12:32.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "pt2", 00:12:32.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "pt3", 00:12:32.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "pt4", 00:12:32.800 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 } 00:12:32.800 ] 00:12:32.800 } 00:12:32.800 } 00:12:32.800 }' 00:12:32.800 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:33.060 pt2 00:12:33.060 pt3 00:12:33.060 pt4' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.060 
16:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 16:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.060 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.060 [2024-11-05 16:26:46.143263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d6a181e0-c0bd-4ae0-91f1-da64935243ed '!=' d6a181e0-c0bd-4ae0-91f1-da64935243ed ']' 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71018 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71018 ']' 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71018 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:33.320 16:26:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71018 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71018' 00:12:33.320 killing process with pid 71018 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 71018 00:12:33.320 [2024-11-05 16:26:46.209060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.320 [2024-11-05 16:26:46.209239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.320 16:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 71018 00:12:33.320 [2024-11-05 16:26:46.209359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.320 [2024-11-05 16:26:46.209372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:33.579 [2024-11-05 16:26:46.668322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.959 ************************************ 00:12:34.959 END TEST raid_superblock_test 00:12:34.959 ************************************ 00:12:34.959 16:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:34.959 00:12:34.959 real 0m5.833s 00:12:34.959 user 0m8.272s 00:12:34.959 sys 0m0.988s 00:12:34.959 16:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.959 16:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.959 
16:26:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:34.959 16:26:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:34.959 16:26:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:34.959 16:26:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.959 ************************************ 00:12:34.959 START TEST raid_read_error_test 00:12:34.959 ************************************ 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.959 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hOgR2DTS0N 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71289 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.960 16:26:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71289 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71289 ']' 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.960 16:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.218 [2024-11-05 16:26:48.125475] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:12:35.218 [2024-11-05 16:26:48.125714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:12:35.218 [2024-11-05 16:26:48.308584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.476 [2024-11-05 16:26:48.432222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.734 [2024-11-05 16:26:48.665132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.734 [2024-11-05 16:26:48.665311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.992 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.993 BaseBdev1_malloc 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.993 true 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.993 [2024-11-05 16:26:49.070137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.993 [2024-11-05 16:26:49.070217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.993 [2024-11-05 16:26:49.070241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.993 [2024-11-05 16:26:49.070254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.993 [2024-11-05 16:26:49.072748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.993 [2024-11-05 16:26:49.072849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.993 BaseBdev1 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.993 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.251 BaseBdev2_malloc 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.251 true 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.251 [2024-11-05 16:26:49.140444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:36.251 [2024-11-05 16:26:49.140552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.251 [2024-11-05 16:26:49.140590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:36.251 [2024-11-05 16:26:49.140602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.251 [2024-11-05 16:26:49.143163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.251 [2024-11-05 16:26:49.143266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.251 BaseBdev2 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.251 BaseBdev3_malloc 00:12:36.251 16:26:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.251 true 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:36.251 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 [2024-11-05 16:26:49.223225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:36.252 [2024-11-05 16:26:49.223364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.252 [2024-11-05 16:26:49.223396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:36.252 [2024-11-05 16:26:49.223407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.252 [2024-11-05 16:26:49.225998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.252 [2024-11-05 16:26:49.226046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.252 BaseBdev3 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 BaseBdev4_malloc 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 true 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 [2024-11-05 16:26:49.293339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.252 [2024-11-05 16:26:49.293496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.252 [2024-11-05 16:26:49.293555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.252 [2024-11-05 16:26:49.293570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.252 [2024-11-05 16:26:49.296010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.252 [2024-11-05 16:26:49.296061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.252 BaseBdev4 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 [2024-11-05 16:26:49.305432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.252 [2024-11-05 16:26:49.307811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.252 [2024-11-05 16:26:49.307917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.252 [2024-11-05 16:26:49.307989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.252 [2024-11-05 16:26:49.308249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:36.252 [2024-11-05 16:26:49.308266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:36.252 [2024-11-05 16:26:49.308630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:36.252 [2024-11-05 16:26:49.308846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:36.252 [2024-11-05 16:26:49.308858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:36.252 [2024-11-05 16:26:49.309076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:36.252 16:26:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.252 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.510 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.510 "name": "raid_bdev1", 00:12:36.510 "uuid": "eb3b1f46-d005-4df3-a170-e475f4a24e8e", 00:12:36.510 "strip_size_kb": 64, 00:12:36.510 "state": "online", 00:12:36.510 "raid_level": "raid0", 00:12:36.510 "superblock": true, 00:12:36.510 "num_base_bdevs": 4, 00:12:36.510 "num_base_bdevs_discovered": 4, 00:12:36.510 "num_base_bdevs_operational": 4, 00:12:36.510 "base_bdevs_list": [ 00:12:36.510 
{ 00:12:36.510 "name": "BaseBdev1", 00:12:36.510 "uuid": "c5d32041-62cd-521c-b221-a1873de5585d", 00:12:36.510 "is_configured": true, 00:12:36.510 "data_offset": 2048, 00:12:36.510 "data_size": 63488 00:12:36.510 }, 00:12:36.510 { 00:12:36.510 "name": "BaseBdev2", 00:12:36.510 "uuid": "e6e60f5e-28fa-56c6-837a-95a3e3514055", 00:12:36.510 "is_configured": true, 00:12:36.510 "data_offset": 2048, 00:12:36.510 "data_size": 63488 00:12:36.510 }, 00:12:36.510 { 00:12:36.510 "name": "BaseBdev3", 00:12:36.510 "uuid": "29f110d4-939a-59a5-bcd9-24be79d77c73", 00:12:36.510 "is_configured": true, 00:12:36.510 "data_offset": 2048, 00:12:36.510 "data_size": 63488 00:12:36.510 }, 00:12:36.510 { 00:12:36.510 "name": "BaseBdev4", 00:12:36.510 "uuid": "55f1d312-4ab5-5458-8702-ec4cdca04c4e", 00:12:36.510 "is_configured": true, 00:12:36.510 "data_offset": 2048, 00:12:36.510 "data_size": 63488 00:12:36.510 } 00:12:36.510 ] 00:12:36.510 }' 00:12:36.510 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.510 16:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.767 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.767 16:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.024 [2024-11-05 16:26:49.894280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.965 16:26:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.965 16:26:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.965 "name": "raid_bdev1", 00:12:37.965 "uuid": "eb3b1f46-d005-4df3-a170-e475f4a24e8e", 00:12:37.965 "strip_size_kb": 64, 00:12:37.965 "state": "online", 00:12:37.965 "raid_level": "raid0", 00:12:37.965 "superblock": true, 00:12:37.965 "num_base_bdevs": 4, 00:12:37.965 "num_base_bdevs_discovered": 4, 00:12:37.965 "num_base_bdevs_operational": 4, 00:12:37.965 "base_bdevs_list": [ 00:12:37.965 { 00:12:37.965 "name": "BaseBdev1", 00:12:37.965 "uuid": "c5d32041-62cd-521c-b221-a1873de5585d", 00:12:37.965 "is_configured": true, 00:12:37.965 "data_offset": 2048, 00:12:37.965 "data_size": 63488 00:12:37.965 }, 00:12:37.965 { 00:12:37.965 "name": "BaseBdev2", 00:12:37.965 "uuid": "e6e60f5e-28fa-56c6-837a-95a3e3514055", 00:12:37.965 "is_configured": true, 00:12:37.965 "data_offset": 2048, 00:12:37.965 "data_size": 63488 00:12:37.965 }, 00:12:37.965 { 00:12:37.965 "name": "BaseBdev3", 00:12:37.965 "uuid": "29f110d4-939a-59a5-bcd9-24be79d77c73", 00:12:37.965 "is_configured": true, 00:12:37.965 "data_offset": 2048, 00:12:37.965 "data_size": 63488 00:12:37.965 }, 00:12:37.965 { 00:12:37.965 "name": "BaseBdev4", 00:12:37.965 "uuid": "55f1d312-4ab5-5458-8702-ec4cdca04c4e", 00:12:37.965 "is_configured": true, 00:12:37.965 "data_offset": 2048, 00:12:37.965 "data_size": 63488 00:12:37.965 } 00:12:37.965 ] 00:12:37.965 }' 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.965 16:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.225 [2024-11-05 16:26:51.291573] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.225 [2024-11-05 16:26:51.291629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.225 [2024-11-05 16:26:51.294901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.225 [2024-11-05 16:26:51.294972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.225 [2024-11-05 16:26:51.295023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.225 [2024-11-05 16:26:51.295036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:38.225 { 00:12:38.225 "results": [ 00:12:38.225 { 00:12:38.225 "job": "raid_bdev1", 00:12:38.225 "core_mask": "0x1", 00:12:38.225 "workload": "randrw", 00:12:38.225 "percentage": 50, 00:12:38.225 "status": "finished", 00:12:38.225 "queue_depth": 1, 00:12:38.225 "io_size": 131072, 00:12:38.225 "runtime": 1.397742, 00:12:38.225 "iops": 13465.289016141749, 00:12:38.225 "mibps": 1683.1611270177186, 00:12:38.225 "io_failed": 1, 00:12:38.225 "io_timeout": 0, 00:12:38.225 "avg_latency_us": 103.11165852094479, 00:12:38.225 "min_latency_us": 29.289082969432314, 00:12:38.225 "max_latency_us": 1624.0908296943232 00:12:38.225 } 00:12:38.225 ], 00:12:38.225 "core_count": 1 00:12:38.225 } 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71289 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71289 ']' 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71289 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:38.225 16:26:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.225 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71289 00:12:38.485 killing process with pid 71289 00:12:38.485 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.485 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.485 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71289' 00:12:38.485 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71289 00:12:38.485 [2024-11-05 16:26:51.332627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.485 16:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71289 00:12:38.743 [2024-11-05 16:26:51.721041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.120 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hOgR2DTS0N 00:12:40.120 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:40.121 ************************************ 00:12:40.121 END TEST raid_read_error_test 00:12:40.121 ************************************ 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:40.121 
00:12:40.121 real 0m5.032s 00:12:40.121 user 0m5.913s 00:12:40.121 sys 0m0.662s 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.121 16:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.121 16:26:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:40.121 16:26:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:40.121 16:26:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.121 16:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.121 ************************************ 00:12:40.121 START TEST raid_write_error_test 00:12:40.121 ************************************ 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.121 16:26:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z68sqihQtr 
00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71437 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71437 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71437 ']' 00:12:40.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.121 16:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.380 [2024-11-05 16:26:53.223911] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:12:40.380 [2024-11-05 16:26:53.224059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71437 ] 00:12:40.380 [2024-11-05 16:26:53.403615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.639 [2024-11-05 16:26:53.528776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.899 [2024-11-05 16:26:53.750057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.899 [2024-11-05 16:26:53.750147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 BaseBdev1_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 true 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 [2024-11-05 16:26:54.158458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:41.157 [2024-11-05 16:26:54.158539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.157 [2024-11-05 16:26:54.158563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:41.157 [2024-11-05 16:26:54.158576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.157 [2024-11-05 16:26:54.160968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.157 [2024-11-05 16:26:54.161081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.157 BaseBdev1 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 BaseBdev2_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:41.157 16:26:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 true 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 [2024-11-05 16:26:54.231600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:41.157 [2024-11-05 16:26:54.231668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.157 [2024-11-05 16:26:54.231694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:41.157 [2024-11-05 16:26:54.231707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.157 [2024-11-05 16:26:54.234087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.157 [2024-11-05 16:26:54.234188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.157 BaseBdev2 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.157 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:41.417 BaseBdev3_malloc 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 true 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 [2024-11-05 16:26:54.316840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:41.417 [2024-11-05 16:26:54.316910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.417 [2024-11-05 16:26:54.316932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:41.417 [2024-11-05 16:26:54.316944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.417 [2024-11-05 16:26:54.319374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.417 [2024-11-05 16:26:54.319474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:41.417 BaseBdev3 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 BaseBdev4_malloc 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 true 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 [2024-11-05 16:26:54.386124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:41.417 [2024-11-05 16:26:54.386187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.417 [2024-11-05 16:26:54.386209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:41.417 [2024-11-05 16:26:54.386221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.417 [2024-11-05 16:26:54.388587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.417 [2024-11-05 16:26:54.388678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:41.417 BaseBdev4 
00:12:41.417 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 [2024-11-05 16:26:54.398160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.418 [2024-11-05 16:26:54.400149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.418 [2024-11-05 16:26:54.400292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.418 [2024-11-05 16:26:54.400377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.418 [2024-11-05 16:26:54.400685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:41.418 [2024-11-05 16:26:54.400709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:41.418 [2024-11-05 16:26:54.401006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:41.418 [2024-11-05 16:26:54.401196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:41.418 [2024-11-05 16:26:54.401209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:41.418 [2024-11-05 16:26:54.401423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.418 "name": "raid_bdev1", 00:12:41.418 "uuid": "53ed9b90-dbc0-4cf2-bda9-e1c26ed54b56", 00:12:41.418 "strip_size_kb": 64, 00:12:41.418 "state": "online", 00:12:41.418 "raid_level": "raid0", 00:12:41.418 "superblock": true, 00:12:41.418 "num_base_bdevs": 4, 00:12:41.418 "num_base_bdevs_discovered": 4, 00:12:41.418 
"num_base_bdevs_operational": 4, 00:12:41.418 "base_bdevs_list": [ 00:12:41.418 { 00:12:41.418 "name": "BaseBdev1", 00:12:41.418 "uuid": "b75aba7d-5310-584a-ab79-fcca1e503c6b", 00:12:41.418 "is_configured": true, 00:12:41.418 "data_offset": 2048, 00:12:41.418 "data_size": 63488 00:12:41.418 }, 00:12:41.418 { 00:12:41.418 "name": "BaseBdev2", 00:12:41.418 "uuid": "13f17bf6-3c6c-5acc-869e-75760007337c", 00:12:41.418 "is_configured": true, 00:12:41.418 "data_offset": 2048, 00:12:41.418 "data_size": 63488 00:12:41.418 }, 00:12:41.418 { 00:12:41.418 "name": "BaseBdev3", 00:12:41.418 "uuid": "fa28f9a9-0bd7-5f78-84bf-37ffabf8bcc6", 00:12:41.418 "is_configured": true, 00:12:41.418 "data_offset": 2048, 00:12:41.418 "data_size": 63488 00:12:41.418 }, 00:12:41.418 { 00:12:41.418 "name": "BaseBdev4", 00:12:41.418 "uuid": "03cc59bf-855c-58a3-b031-7146d61cbf78", 00:12:41.418 "is_configured": true, 00:12:41.418 "data_offset": 2048, 00:12:41.418 "data_size": 63488 00:12:41.418 } 00:12:41.418 ] 00:12:41.418 }' 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.418 16:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.986 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.986 16:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:41.986 [2024-11-05 16:26:54.954718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.923 "name": "raid_bdev1", 00:12:42.923 "uuid": "53ed9b90-dbc0-4cf2-bda9-e1c26ed54b56", 00:12:42.923 "strip_size_kb": 64, 00:12:42.923 "state": "online", 00:12:42.923 "raid_level": "raid0", 00:12:42.923 "superblock": true, 00:12:42.923 "num_base_bdevs": 4, 00:12:42.923 "num_base_bdevs_discovered": 4, 00:12:42.923 "num_base_bdevs_operational": 4, 00:12:42.923 "base_bdevs_list": [ 00:12:42.923 { 00:12:42.923 "name": "BaseBdev1", 00:12:42.923 "uuid": "b75aba7d-5310-584a-ab79-fcca1e503c6b", 00:12:42.923 "is_configured": true, 00:12:42.923 "data_offset": 2048, 00:12:42.923 "data_size": 63488 00:12:42.923 }, 00:12:42.923 { 00:12:42.923 "name": "BaseBdev2", 00:12:42.923 "uuid": "13f17bf6-3c6c-5acc-869e-75760007337c", 00:12:42.923 "is_configured": true, 00:12:42.923 "data_offset": 2048, 00:12:42.923 "data_size": 63488 00:12:42.923 }, 00:12:42.923 { 00:12:42.923 "name": "BaseBdev3", 00:12:42.923 "uuid": "fa28f9a9-0bd7-5f78-84bf-37ffabf8bcc6", 00:12:42.923 "is_configured": true, 00:12:42.923 "data_offset": 2048, 00:12:42.923 "data_size": 63488 00:12:42.923 }, 00:12:42.923 { 00:12:42.923 "name": "BaseBdev4", 00:12:42.923 "uuid": "03cc59bf-855c-58a3-b031-7146d61cbf78", 00:12:42.923 "is_configured": true, 00:12:42.923 "data_offset": 2048, 00:12:42.923 "data_size": 63488 00:12:42.923 } 00:12:42.923 ] 00:12:42.923 }' 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.923 16:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.492 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.492 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.492 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.492 [2024-11-05 16:26:56.392244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.492 [2024-11-05 16:26:56.392287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.492 [2024-11-05 16:26:56.395465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.492 [2024-11-05 16:26:56.395555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.492 [2024-11-05 16:26:56.395606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.492 [2024-11-05 16:26:56.395619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:43.492 { 00:12:43.492 "results": [ 00:12:43.492 { 00:12:43.492 "job": "raid_bdev1", 00:12:43.492 "core_mask": "0x1", 00:12:43.492 "workload": "randrw", 00:12:43.492 "percentage": 50, 00:12:43.492 "status": "finished", 00:12:43.492 "queue_depth": 1, 00:12:43.492 "io_size": 131072, 00:12:43.492 "runtime": 1.4383, 00:12:43.492 "iops": 14441.35437669471, 00:12:43.492 "mibps": 1805.1692970868387, 00:12:43.492 "io_failed": 1, 00:12:43.492 "io_timeout": 0, 00:12:43.492 "avg_latency_us": 96.19066042043495, 00:12:43.492 "min_latency_us": 27.94759825327511, 00:12:43.492 "max_latency_us": 1717.1004366812226 00:12:43.492 } 00:12:43.492 ], 00:12:43.492 "core_count": 1 00:12:43.492 } 00:12:43.492 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.492 16:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71437 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71437 ']' 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71437 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71437 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.493 killing process with pid 71437 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71437' 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71437 00:12:43.493 [2024-11-05 16:26:56.434105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.493 16:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71437 00:12:43.752 [2024-11-05 16:26:56.798347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z68sqihQtr 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:45.157 00:12:45.157 real 0m4.975s 00:12:45.157 user 0m5.921s 00:12:45.157 sys 0m0.582s 00:12:45.157 16:26:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:45.157 ************************************ 00:12:45.157 END TEST raid_write_error_test 00:12:45.157 ************************************ 00:12:45.157 16:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.158 16:26:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:45.158 16:26:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:45.158 16:26:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:45.158 16:26:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.158 16:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.158 ************************************ 00:12:45.158 START TEST raid_state_function_test 00:12:45.158 ************************************ 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71586 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71586' 00:12:45.158 Process raid pid: 71586 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71586 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71586 ']' 00:12:45.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.158 16:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.417 [2024-11-05 16:26:58.259486] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:12:45.417 [2024-11-05 16:26:58.260046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:45.417 [2024-11-05 16:26:58.437162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.676 [2024-11-05 16:26:58.569307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.935 [2024-11-05 16:26:58.787788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:45.935 [2024-11-05 16:26:58.787908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.194 [2024-11-05 16:26:59.130031] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:46.194 [2024-11-05 16:26:59.130151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:46.194 [2024-11-05 16:26:59.130210] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:46.194 [2024-11-05 16:26:59.130235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:46.194 [2024-11-05 16:26:59.130260] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:46.194 [2024-11-05 16:26:59.130294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:46.194 [2024-11-05 16:26:59.130322] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:46.194 [2024-11-05 16:26:59.130345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.194 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.195 "name": "Existed_Raid",
00:12:46.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.195 "strip_size_kb": 64,
00:12:46.195 "state": "configuring",
00:12:46.195 "raid_level": "concat",
00:12:46.195 "superblock": false,
00:12:46.195 "num_base_bdevs": 4,
00:12:46.195 "num_base_bdevs_discovered": 0,
00:12:46.195 "num_base_bdevs_operational": 4,
00:12:46.195 "base_bdevs_list": [
00:12:46.195 {
00:12:46.195 "name": "BaseBdev1",
00:12:46.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.195 "is_configured": false,
00:12:46.195 "data_offset": 0,
00:12:46.195 "data_size": 0
00:12:46.195 },
00:12:46.195 {
00:12:46.195 "name": "BaseBdev2",
00:12:46.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.195 "is_configured": false,
00:12:46.195 "data_offset": 0,
00:12:46.195 "data_size": 0
00:12:46.195 },
00:12:46.195 {
00:12:46.195 "name": "BaseBdev3",
00:12:46.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.195 "is_configured": false,
00:12:46.195 "data_offset": 0,
00:12:46.195 "data_size": 0
00:12:46.195 },
00:12:46.195 {
00:12:46.195 "name": "BaseBdev4",
00:12:46.195 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.195 "is_configured": false,
00:12:46.195 "data_offset": 0,
00:12:46.195 "data_size": 0
00:12:46.195 }
00:12:46.195 ]
00:12:46.195 }'
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.195 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 [2024-11-05 16:26:59.609230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:46.763 [2024-11-05 16:26:59.609371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 [2024-11-05 16:26:59.621214] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:46.763 [2024-11-05 16:26:59.621270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:46.763 [2024-11-05 16:26:59.621281] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:46.763 [2024-11-05 16:26:59.621291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:46.763 [2024-11-05 16:26:59.621298] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:46.763 [2024-11-05 16:26:59.621308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:46.763 [2024-11-05 16:26:59.621315] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:46.763 [2024-11-05 16:26:59.621324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 [2024-11-05 16:26:59.670464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.763 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.763 [
00:12:46.763 {
00:12:46.763 "name": "BaseBdev1",
00:12:46.763 "aliases": [
00:12:46.763 "178be5d4-24ae-4031-82d6-6c319bc8d8cb"
00:12:46.763 ],
00:12:46.763 "product_name": "Malloc disk",
00:12:46.763 "block_size": 512,
00:12:46.763 "num_blocks": 65536,
00:12:46.763 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb",
00:12:46.763 "assigned_rate_limits": {
00:12:46.763 "rw_ios_per_sec": 0,
00:12:46.763 "rw_mbytes_per_sec": 0,
00:12:46.764 "r_mbytes_per_sec": 0,
00:12:46.764 "w_mbytes_per_sec": 0
00:12:46.764 },
00:12:46.764 "claimed": true,
00:12:46.764 "claim_type": "exclusive_write",
00:12:46.764 "zoned": false,
00:12:46.764 "supported_io_types": {
00:12:46.764 "read": true,
00:12:46.764 "write": true,
00:12:46.764 "unmap": true,
00:12:46.764 "flush": true,
00:12:46.764 "reset": true,
00:12:46.764 "nvme_admin": false,
00:12:46.764 "nvme_io": false,
00:12:46.764 "nvme_io_md": false,
00:12:46.764 "write_zeroes": true,
00:12:46.764 "zcopy": true,
00:12:46.764 "get_zone_info": false,
00:12:46.764 "zone_management": false,
00:12:46.764 "zone_append": false,
00:12:46.764 "compare": false,
00:12:46.764 "compare_and_write": false,
00:12:46.764 "abort": true,
00:12:46.764 "seek_hole": false,
00:12:46.764 "seek_data": false,
00:12:46.764 "copy": true,
00:12:46.764 "nvme_iov_md": false
00:12:46.764 },
00:12:46.764 "memory_domains": [
00:12:46.764 {
00:12:46.764 "dma_device_id": "system",
00:12:46.764 "dma_device_type": 1
00:12:46.764 },
00:12:46.764 {
00:12:46.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:46.764 "dma_device_type": 2
00:12:46.764 }
00:12:46.764 ],
00:12:46.764 "driver_specific": {}
00:12:46.764 }
00:12:46.764 ]
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.764 "name": "Existed_Raid",
00:12:46.764 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.764 "strip_size_kb": 64,
00:12:46.764 "state": "configuring",
00:12:46.764 "raid_level": "concat",
00:12:46.764 "superblock": false,
00:12:46.764 "num_base_bdevs": 4,
00:12:46.764 "num_base_bdevs_discovered": 1,
00:12:46.764 "num_base_bdevs_operational": 4,
00:12:46.764 "base_bdevs_list": [
00:12:46.764 {
00:12:46.764 "name": "BaseBdev1",
00:12:46.764 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb",
00:12:46.764 "is_configured": true,
00:12:46.764 "data_offset": 0,
00:12:46.764 "data_size": 65536
00:12:46.764 },
00:12:46.764 {
00:12:46.764 "name": "BaseBdev2",
00:12:46.764 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.764 "is_configured": false,
00:12:46.764 "data_offset": 0,
00:12:46.764 "data_size": 0
00:12:46.764 },
00:12:46.764 {
00:12:46.764 "name": "BaseBdev3",
00:12:46.764 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.764 "is_configured": false,
00:12:46.764 "data_offset": 0,
00:12:46.764 "data_size": 0
00:12:46.764 },
00:12:46.764 {
00:12:46.764 "name": "BaseBdev4",
00:12:46.764 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.764 "is_configured": false,
00:12:46.764 "data_offset": 0,
00:12:46.764 "data_size": 0
00:12:46.764 }
00:12:46.764 ]
00:12:46.764 }'
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.764 16:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.332 [2024-11-05 16:27:00.169702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:47.332 [2024-11-05 16:27:00.169824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.332 [2024-11-05 16:27:00.181756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-11-05 16:27:00.183754] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-05 16:27:00.183836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-05 16:27:00.183867] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-05 16:27:00.183892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
[2024-11-05 16:27:00.183912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
[2024-11-05 16:27:00.183933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.332 "name": "Existed_Raid",
00:12:47.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.332 "strip_size_kb": 64,
00:12:47.332 "state": "configuring",
00:12:47.332 "raid_level": "concat",
00:12:47.332 "superblock": false,
00:12:47.332 "num_base_bdevs": 4,
00:12:47.332 "num_base_bdevs_discovered": 1,
00:12:47.332 "num_base_bdevs_operational": 4,
00:12:47.332 "base_bdevs_list": [
00:12:47.332 {
00:12:47.332 "name": "BaseBdev1",
00:12:47.332 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb",
00:12:47.332 "is_configured": true,
00:12:47.332 "data_offset": 0,
00:12:47.332 "data_size": 65536
00:12:47.332 },
00:12:47.332 {
00:12:47.332 "name": "BaseBdev2",
00:12:47.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.332 "is_configured": false,
00:12:47.332 "data_offset": 0,
00:12:47.332 "data_size": 0
00:12:47.332 },
00:12:47.332 {
00:12:47.332 "name": "BaseBdev3",
00:12:47.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.332 "is_configured": false,
00:12:47.332 "data_offset": 0,
00:12:47.332 "data_size": 0
00:12:47.332 },
00:12:47.332 {
00:12:47.332 "name": "BaseBdev4",
00:12:47.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.332 "is_configured": false,
00:12:47.332 "data_offset": 0,
00:12:47.332 "data_size": 0
00:12:47.332 }
00:12:47.332 ]
00:12:47.332 }'
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.332 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.592 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:47.592 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.592 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.851 [2024-11-05 16:27:00.727265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.851 [
00:12:47.851 {
00:12:47.851 "name": "BaseBdev2",
00:12:47.851 "aliases": [
00:12:47.851 "dd40b109-3070-449a-86c2-90960a48fe20"
00:12:47.851 ],
00:12:47.851 "product_name": "Malloc disk",
00:12:47.851 "block_size": 512,
00:12:47.851 "num_blocks": 65536,
00:12:47.851 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20",
00:12:47.851 "assigned_rate_limits": {
00:12:47.851 "rw_ios_per_sec": 0,
00:12:47.851 "rw_mbytes_per_sec": 0,
00:12:47.851 "r_mbytes_per_sec": 0,
00:12:47.851 "w_mbytes_per_sec": 0
00:12:47.851 },
00:12:47.851 "claimed": true,
00:12:47.851 "claim_type": "exclusive_write",
00:12:47.851 "zoned": false,
00:12:47.851 "supported_io_types": {
00:12:47.851 "read": true,
00:12:47.851 "write": true,
00:12:47.851 "unmap": true,
00:12:47.851 "flush": true,
00:12:47.851 "reset": true,
00:12:47.851 "nvme_admin": false,
00:12:47.851 "nvme_io": false,
00:12:47.851 "nvme_io_md": false,
00:12:47.851 "write_zeroes": true,
00:12:47.851 "zcopy": true,
00:12:47.851 "get_zone_info": false,
00:12:47.851 "zone_management": false,
00:12:47.851 "zone_append": false,
00:12:47.851 "compare": false,
00:12:47.851 "compare_and_write": false,
00:12:47.851 "abort": true,
00:12:47.851 "seek_hole": false,
00:12:47.851 "seek_data": false,
00:12:47.851 "copy": true,
00:12:47.851 "nvme_iov_md": false
00:12:47.851 },
00:12:47.851 "memory_domains": [
00:12:47.851 {
00:12:47.851 "dma_device_id": "system",
00:12:47.851 "dma_device_type": 1
00:12:47.851 },
00:12:47.851 {
00:12:47.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:47.851 "dma_device_type": 2
00:12:47.851 }
00:12:47.851 ],
00:12:47.851 "driver_specific": {}
00:12:47.851 }
00:12:47.851 ]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.851 "name": "Existed_Raid",
00:12:47.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.851 "strip_size_kb": 64,
00:12:47.851 "state": "configuring",
00:12:47.851 "raid_level": "concat",
00:12:47.851 "superblock": false,
00:12:47.851 "num_base_bdevs": 4,
00:12:47.851 "num_base_bdevs_discovered": 2,
00:12:47.851 "num_base_bdevs_operational": 4,
00:12:47.851 "base_bdevs_list": [
00:12:47.851 {
00:12:47.851 "name": "BaseBdev1",
00:12:47.851 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb",
00:12:47.851 "is_configured": true,
00:12:47.851 "data_offset": 0,
00:12:47.851 "data_size": 65536
00:12:47.851 },
00:12:47.851 {
00:12:47.851 "name": "BaseBdev2",
00:12:47.851 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20",
00:12:47.851 "is_configured": true,
00:12:47.851 "data_offset": 0,
00:12:47.851 "data_size": 65536
00:12:47.851 },
00:12:47.851 {
00:12:47.851 "name": "BaseBdev3",
00:12:47.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.851 "is_configured": false,
00:12:47.851 "data_offset": 0,
00:12:47.851 "data_size": 0
00:12:47.851 },
00:12:47.851 {
00:12:47.851 "name": "BaseBdev4",
00:12:47.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.851 "is_configured": false,
00:12:47.851 "data_offset": 0,
00:12:47.851 "data_size": 0
00:12:47.851 }
00:12:47.851 ]
00:12:47.851 }'
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.851 16:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.422 [2024-11-05 16:27:01.295938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
BaseBdev3
16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.422 [
00:12:48.422 {
00:12:48.422 "name": "BaseBdev3",
00:12:48.422 "aliases": [
00:12:48.422 "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2"
00:12:48.422 ],
00:12:48.422 "product_name": "Malloc disk",
00:12:48.422 "block_size": 512,
00:12:48.422 "num_blocks": 65536,
00:12:48.422 "uuid": "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2",
00:12:48.422 "assigned_rate_limits": {
00:12:48.422 "rw_ios_per_sec": 0,
00:12:48.422 "rw_mbytes_per_sec": 0,
00:12:48.422 "r_mbytes_per_sec": 0,
00:12:48.422 "w_mbytes_per_sec": 0
00:12:48.422 },
00:12:48.422 "claimed": true,
00:12:48.422 "claim_type": "exclusive_write",
00:12:48.422 "zoned": false,
00:12:48.422 "supported_io_types": {
00:12:48.422 "read": true,
00:12:48.422 "write": true,
00:12:48.422 "unmap": true,
00:12:48.422 "flush": true,
00:12:48.422 "reset": true,
00:12:48.422 "nvme_admin": false,
00:12:48.422 "nvme_io": false,
00:12:48.422 "nvme_io_md": false,
00:12:48.422 "write_zeroes": true,
00:12:48.422 "zcopy": true,
00:12:48.422 "get_zone_info": false,
00:12:48.422 "zone_management": false,
00:12:48.422 "zone_append": false,
00:12:48.422 "compare": false,
00:12:48.422 "compare_and_write": false,
00:12:48.422 "abort": true,
00:12:48.422 "seek_hole": false,
00:12:48.422 "seek_data": false,
00:12:48.422 "copy": true,
00:12:48.422 "nvme_iov_md": false
00:12:48.422 },
00:12:48.422 "memory_domains": [
00:12:48.422 {
00:12:48.422 "dma_device_id": "system",
00:12:48.422 "dma_device_type": 1
00:12:48.422 },
00:12:48.422 {
00:12:48.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:48.422 "dma_device_type": 2
00:12:48.422 }
00:12:48.422 ],
00:12:48.422 "driver_specific": {}
00:12:48.422 }
00:12:48.422 ]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.422 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.422 "name": "Existed_Raid",
00:12:48.422 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:48.422 "strip_size_kb": 64,
00:12:48.422 "state": "configuring",
00:12:48.422 "raid_level": "concat",
00:12:48.422 "superblock": false,
00:12:48.422 "num_base_bdevs": 4,
00:12:48.422 "num_base_bdevs_discovered": 3,
00:12:48.422 "num_base_bdevs_operational": 4,
00:12:48.422 "base_bdevs_list": [
00:12:48.422 {
00:12:48.422 "name": "BaseBdev1",
00:12:48.422 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb",
00:12:48.422 "is_configured": true,
00:12:48.422 "data_offset": 0,
00:12:48.422 "data_size": 65536
00:12:48.422 },
00:12:48.422 {
00:12:48.422 "name": "BaseBdev2",
00:12:48.422 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20",
00:12:48.422 "is_configured": true,
00:12:48.422 "data_offset": 0,
00:12:48.422 "data_size": 65536
00:12:48.422 },
00:12:48.422 {
00:12:48.423 "name": "BaseBdev3",
00:12:48.423 "uuid": "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2",
00:12:48.423 "is_configured": true,
00:12:48.423 "data_offset": 0,
00:12:48.423 "data_size": 65536
00:12:48.423 },
00:12:48.423 {
00:12:48.423 "name": "BaseBdev4",
00:12:48.423 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:48.423 "is_configured": false,
00:12:48.423 "data_offset": 0,
00:12:48.423 "data_size": 0
00:12:48.423 }
00:12:48.423 ]
00:12:48.423 }'
00:12:48.423 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:48.423 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.997 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:48.997 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.997 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.998 [2024-11-05 16:27:01.843566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
[2024-11-05 16:27:01.843746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
[2024-11-05 16:27:01.843776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
[2024-11-05 16:27:01.844145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
[2024-11-05 16:27:01.844383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
[2024-11-05 16:27:01.844437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
[2024-11-05 16:27:01.844883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev4
16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.998 [
00:12:48.998 {
00:12:48.998 "name": "BaseBdev4",
00:12:48.998 "aliases": [
00:12:48.998 "ad1f151c-bd5b-42e2-8725-376639d78ad4"
00:12:48.998 ],
00:12:48.998 "product_name": "Malloc disk",
00:12:48.998 "block_size": 512,
00:12:48.998 "num_blocks": 65536,
00:12:48.998 "uuid": "ad1f151c-bd5b-42e2-8725-376639d78ad4",
00:12:48.998 "assigned_rate_limits": {
00:12:48.998 "rw_ios_per_sec": 0,
00:12:48.998 "rw_mbytes_per_sec": 0,
00:12:48.998 "r_mbytes_per_sec": 0,
00:12:48.998 "w_mbytes_per_sec": 0
00:12:48.998 },
00:12:48.998 "claimed": true,
00:12:48.998 "claim_type": "exclusive_write",
00:12:48.998 "zoned": false,
00:12:48.998 "supported_io_types": {
00:12:48.998 "read": true,
00:12:48.998 "write": true,
00:12:48.998 "unmap": true,
00:12:48.998 "flush": true,
00:12:48.998 "reset": true,
"nvme_admin": false, 00:12:48.998 "nvme_io": false, 00:12:48.998 "nvme_io_md": false, 00:12:48.998 "write_zeroes": true, 00:12:48.998 "zcopy": true, 00:12:48.998 "get_zone_info": false, 00:12:48.998 "zone_management": false, 00:12:48.998 "zone_append": false, 00:12:48.998 "compare": false, 00:12:48.998 "compare_and_write": false, 00:12:48.998 "abort": true, 00:12:48.998 "seek_hole": false, 00:12:48.998 "seek_data": false, 00:12:48.998 "copy": true, 00:12:48.998 "nvme_iov_md": false 00:12:48.998 }, 00:12:48.998 "memory_domains": [ 00:12:48.998 { 00:12:48.998 "dma_device_id": "system", 00:12:48.998 "dma_device_type": 1 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.998 "dma_device_type": 2 00:12:48.998 } 00:12:48.998 ], 00:12:48.998 "driver_specific": {} 00:12:48.998 } 00:12:48.998 ] 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.998 
16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.998 "name": "Existed_Raid", 00:12:48.998 "uuid": "57406094-cfcb-40e5-a623-3f7b8d7611e0", 00:12:48.998 "strip_size_kb": 64, 00:12:48.998 "state": "online", 00:12:48.998 "raid_level": "concat", 00:12:48.998 "superblock": false, 00:12:48.998 "num_base_bdevs": 4, 00:12:48.998 "num_base_bdevs_discovered": 4, 00:12:48.998 "num_base_bdevs_operational": 4, 00:12:48.998 "base_bdevs_list": [ 00:12:48.998 { 00:12:48.998 "name": "BaseBdev1", 00:12:48.998 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 0, 00:12:48.998 "data_size": 65536 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "name": "BaseBdev2", 00:12:48.998 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 0, 00:12:48.998 "data_size": 65536 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "name": "BaseBdev3", 
00:12:48.998 "uuid": "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 0, 00:12:48.998 "data_size": 65536 00:12:48.998 }, 00:12:48.998 { 00:12:48.998 "name": "BaseBdev4", 00:12:48.998 "uuid": "ad1f151c-bd5b-42e2-8725-376639d78ad4", 00:12:48.998 "is_configured": true, 00:12:48.998 "data_offset": 0, 00:12:48.998 "data_size": 65536 00:12:48.998 } 00:12:48.998 ] 00:12:48.998 }' 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.998 16:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.256 [2024-11-05 16:27:02.311226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.256 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.515 
16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:49.515 "name": "Existed_Raid", 00:12:49.515 "aliases": [ 00:12:49.515 "57406094-cfcb-40e5-a623-3f7b8d7611e0" 00:12:49.515 ], 00:12:49.515 "product_name": "Raid Volume", 00:12:49.515 "block_size": 512, 00:12:49.515 "num_blocks": 262144, 00:12:49.515 "uuid": "57406094-cfcb-40e5-a623-3f7b8d7611e0", 00:12:49.515 "assigned_rate_limits": { 00:12:49.515 "rw_ios_per_sec": 0, 00:12:49.515 "rw_mbytes_per_sec": 0, 00:12:49.515 "r_mbytes_per_sec": 0, 00:12:49.515 "w_mbytes_per_sec": 0 00:12:49.515 }, 00:12:49.515 "claimed": false, 00:12:49.515 "zoned": false, 00:12:49.515 "supported_io_types": { 00:12:49.515 "read": true, 00:12:49.515 "write": true, 00:12:49.515 "unmap": true, 00:12:49.515 "flush": true, 00:12:49.515 "reset": true, 00:12:49.515 "nvme_admin": false, 00:12:49.515 "nvme_io": false, 00:12:49.515 "nvme_io_md": false, 00:12:49.515 "write_zeroes": true, 00:12:49.515 "zcopy": false, 00:12:49.515 "get_zone_info": false, 00:12:49.515 "zone_management": false, 00:12:49.515 "zone_append": false, 00:12:49.515 "compare": false, 00:12:49.515 "compare_and_write": false, 00:12:49.515 "abort": false, 00:12:49.515 "seek_hole": false, 00:12:49.515 "seek_data": false, 00:12:49.515 "copy": false, 00:12:49.515 "nvme_iov_md": false 00:12:49.515 }, 00:12:49.515 "memory_domains": [ 00:12:49.515 { 00:12:49.515 "dma_device_id": "system", 00:12:49.515 "dma_device_type": 1 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.515 "dma_device_type": 2 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "system", 00:12:49.515 "dma_device_type": 1 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.515 "dma_device_type": 2 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "system", 00:12:49.515 "dma_device_type": 1 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:49.515 "dma_device_type": 2 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "system", 00:12:49.515 "dma_device_type": 1 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.515 "dma_device_type": 2 00:12:49.515 } 00:12:49.515 ], 00:12:49.515 "driver_specific": { 00:12:49.515 "raid": { 00:12:49.515 "uuid": "57406094-cfcb-40e5-a623-3f7b8d7611e0", 00:12:49.515 "strip_size_kb": 64, 00:12:49.515 "state": "online", 00:12:49.515 "raid_level": "concat", 00:12:49.515 "superblock": false, 00:12:49.515 "num_base_bdevs": 4, 00:12:49.515 "num_base_bdevs_discovered": 4, 00:12:49.515 "num_base_bdevs_operational": 4, 00:12:49.515 "base_bdevs_list": [ 00:12:49.515 { 00:12:49.515 "name": "BaseBdev1", 00:12:49.515 "uuid": "178be5d4-24ae-4031-82d6-6c319bc8d8cb", 00:12:49.515 "is_configured": true, 00:12:49.515 "data_offset": 0, 00:12:49.515 "data_size": 65536 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "name": "BaseBdev2", 00:12:49.515 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20", 00:12:49.515 "is_configured": true, 00:12:49.515 "data_offset": 0, 00:12:49.515 "data_size": 65536 00:12:49.515 }, 00:12:49.515 { 00:12:49.515 "name": "BaseBdev3", 00:12:49.515 "uuid": "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2", 00:12:49.515 "is_configured": true, 00:12:49.515 "data_offset": 0, 00:12:49.516 "data_size": 65536 00:12:49.516 }, 00:12:49.516 { 00:12:49.516 "name": "BaseBdev4", 00:12:49.516 "uuid": "ad1f151c-bd5b-42e2-8725-376639d78ad4", 00:12:49.516 "is_configured": true, 00:12:49.516 "data_offset": 0, 00:12:49.516 "data_size": 65536 00:12:49.516 } 00:12:49.516 ] 00:12:49.516 } 00:12:49.516 } 00:12:49.516 }' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:49.516 BaseBdev2 
00:12:49.516 BaseBdev3 00:12:49.516 BaseBdev4' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.516 16:27:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.516 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.775 16:27:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 [2024-11-05 16:27:02.666351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.775 [2024-11-05 16:27:02.666384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.775 [2024-11-05 16:27:02.666436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.775 "name": "Existed_Raid", 00:12:49.775 "uuid": "57406094-cfcb-40e5-a623-3f7b8d7611e0", 00:12:49.775 "strip_size_kb": 64, 00:12:49.775 "state": "offline", 00:12:49.775 "raid_level": "concat", 00:12:49.775 "superblock": false, 00:12:49.775 "num_base_bdevs": 4, 00:12:49.775 "num_base_bdevs_discovered": 3, 00:12:49.775 "num_base_bdevs_operational": 3, 00:12:49.775 "base_bdevs_list": [ 00:12:49.775 { 00:12:49.775 "name": null, 00:12:49.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.775 "is_configured": false, 00:12:49.775 "data_offset": 0, 00:12:49.775 "data_size": 65536 00:12:49.775 }, 00:12:49.775 { 00:12:49.775 "name": "BaseBdev2", 00:12:49.775 "uuid": "dd40b109-3070-449a-86c2-90960a48fe20", 00:12:49.775 "is_configured": 
true, 00:12:49.775 "data_offset": 0, 00:12:49.775 "data_size": 65536 00:12:49.775 }, 00:12:49.775 { 00:12:49.775 "name": "BaseBdev3", 00:12:49.775 "uuid": "9ce5fd27-1c40-4f41-94e9-54ec6ba2b8e2", 00:12:49.775 "is_configured": true, 00:12:49.775 "data_offset": 0, 00:12:49.775 "data_size": 65536 00:12:49.775 }, 00:12:49.775 { 00:12:49.775 "name": "BaseBdev4", 00:12:49.775 "uuid": "ad1f151c-bd5b-42e2-8725-376639d78ad4", 00:12:49.775 "is_configured": true, 00:12:49.775 "data_offset": 0, 00:12:49.775 "data_size": 65536 00:12:49.775 } 00:12:49.775 ] 00:12:49.775 }' 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.775 16:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.342 [2024-11-05 16:27:03.260872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.342 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.343 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.343 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:50.343 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.343 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.601 [2024-11-05 16:27:03.433155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.601 16:27:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.601 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.601 [2024-11-05 16:27:03.609168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:50.601 [2024-11-05 16:27:03.609314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 BaseBdev2 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 [ 00:12:50.861 { 00:12:50.861 "name": "BaseBdev2", 00:12:50.861 "aliases": [ 00:12:50.861 "2bd68517-61b1-466c-b326-2526139295a6" 00:12:50.861 ], 00:12:50.861 "product_name": "Malloc disk", 00:12:50.861 "block_size": 512, 00:12:50.861 "num_blocks": 65536, 00:12:50.861 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:50.861 "assigned_rate_limits": { 00:12:50.861 "rw_ios_per_sec": 0, 00:12:50.861 "rw_mbytes_per_sec": 0, 00:12:50.861 "r_mbytes_per_sec": 0, 00:12:50.861 "w_mbytes_per_sec": 0 00:12:50.861 }, 00:12:50.861 "claimed": false, 00:12:50.861 "zoned": false, 00:12:50.861 "supported_io_types": { 00:12:50.861 "read": true, 00:12:50.861 "write": true, 00:12:50.861 "unmap": true, 00:12:50.861 "flush": true, 00:12:50.861 "reset": true, 00:12:50.861 "nvme_admin": false, 00:12:50.861 "nvme_io": false, 00:12:50.861 "nvme_io_md": false, 00:12:50.861 "write_zeroes": true, 00:12:50.861 "zcopy": true, 00:12:50.861 "get_zone_info": false, 00:12:50.861 "zone_management": false, 00:12:50.861 "zone_append": false, 00:12:50.861 "compare": false, 00:12:50.861 "compare_and_write": false, 00:12:50.861 "abort": true, 00:12:50.861 "seek_hole": false, 00:12:50.861 "seek_data": false, 
00:12:50.861 "copy": true, 00:12:50.861 "nvme_iov_md": false 00:12:50.861 }, 00:12:50.861 "memory_domains": [ 00:12:50.861 { 00:12:50.861 "dma_device_id": "system", 00:12:50.861 "dma_device_type": 1 00:12:50.861 }, 00:12:50.861 { 00:12:50.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.861 "dma_device_type": 2 00:12:50.861 } 00:12:50.861 ], 00:12:50.861 "driver_specific": {} 00:12:50.861 } 00:12:50.861 ] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 BaseBdev3 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.861 
16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.861 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.861 [ 00:12:50.861 { 00:12:50.861 "name": "BaseBdev3", 00:12:50.861 "aliases": [ 00:12:50.861 "96b48ed8-7c08-4595-a8b5-409f17d7ecff" 00:12:50.861 ], 00:12:50.861 "product_name": "Malloc disk", 00:12:50.861 "block_size": 512, 00:12:50.861 "num_blocks": 65536, 00:12:50.861 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:50.861 "assigned_rate_limits": { 00:12:50.861 "rw_ios_per_sec": 0, 00:12:50.861 "rw_mbytes_per_sec": 0, 00:12:50.861 "r_mbytes_per_sec": 0, 00:12:50.861 "w_mbytes_per_sec": 0 00:12:50.861 }, 00:12:50.861 "claimed": false, 00:12:50.861 "zoned": false, 00:12:50.861 "supported_io_types": { 00:12:50.861 "read": true, 00:12:50.861 "write": true, 00:12:50.861 "unmap": true, 00:12:50.861 "flush": true, 00:12:50.861 "reset": true, 00:12:50.861 "nvme_admin": false, 00:12:50.861 "nvme_io": false, 00:12:50.861 "nvme_io_md": false, 00:12:50.861 "write_zeroes": true, 00:12:50.861 "zcopy": true, 00:12:50.861 "get_zone_info": false, 00:12:50.861 "zone_management": false, 00:12:50.861 "zone_append": false, 00:12:50.861 "compare": false, 00:12:50.861 "compare_and_write": false, 00:12:50.861 "abort": true, 00:12:50.861 "seek_hole": false, 00:12:50.861 "seek_data": false, 00:12:50.861 
"copy": true, 00:12:50.861 "nvme_iov_md": false 00:12:50.861 }, 00:12:50.861 "memory_domains": [ 00:12:50.861 { 00:12:50.861 "dma_device_id": "system", 00:12:51.120 "dma_device_type": 1 00:12:51.120 }, 00:12:51.120 { 00:12:51.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.120 "dma_device_type": 2 00:12:51.120 } 00:12:51.120 ], 00:12:51.120 "driver_specific": {} 00:12:51.120 } 00:12:51.120 ] 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.120 16:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.120 BaseBdev4 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:51.120 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:51.120 16:27:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.121 [ 00:12:51.121 { 00:12:51.121 "name": "BaseBdev4", 00:12:51.121 "aliases": [ 00:12:51.121 "84993ca3-1ede-432f-82f3-bbcc68866b88" 00:12:51.121 ], 00:12:51.121 "product_name": "Malloc disk", 00:12:51.121 "block_size": 512, 00:12:51.121 "num_blocks": 65536, 00:12:51.121 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:51.121 "assigned_rate_limits": { 00:12:51.121 "rw_ios_per_sec": 0, 00:12:51.121 "rw_mbytes_per_sec": 0, 00:12:51.121 "r_mbytes_per_sec": 0, 00:12:51.121 "w_mbytes_per_sec": 0 00:12:51.121 }, 00:12:51.121 "claimed": false, 00:12:51.121 "zoned": false, 00:12:51.121 "supported_io_types": { 00:12:51.121 "read": true, 00:12:51.121 "write": true, 00:12:51.121 "unmap": true, 00:12:51.121 "flush": true, 00:12:51.121 "reset": true, 00:12:51.121 "nvme_admin": false, 00:12:51.121 "nvme_io": false, 00:12:51.121 "nvme_io_md": false, 00:12:51.121 "write_zeroes": true, 00:12:51.121 "zcopy": true, 00:12:51.121 "get_zone_info": false, 00:12:51.121 "zone_management": false, 00:12:51.121 "zone_append": false, 00:12:51.121 "compare": false, 00:12:51.121 "compare_and_write": false, 00:12:51.121 "abort": true, 00:12:51.121 "seek_hole": false, 00:12:51.121 "seek_data": false, 00:12:51.121 "copy": true, 
00:12:51.121 "nvme_iov_md": false 00:12:51.121 }, 00:12:51.121 "memory_domains": [ 00:12:51.121 { 00:12:51.121 "dma_device_id": "system", 00:12:51.121 "dma_device_type": 1 00:12:51.121 }, 00:12:51.121 { 00:12:51.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.121 "dma_device_type": 2 00:12:51.121 } 00:12:51.121 ], 00:12:51.121 "driver_specific": {} 00:12:51.121 } 00:12:51.121 ] 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.121 [2024-11-05 16:27:04.038024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.121 [2024-11-05 16:27:04.038139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.121 [2024-11-05 16:27:04.038204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.121 [2024-11-05 16:27:04.040430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.121 [2024-11-05 16:27:04.040575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.121 16:27:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.121 "name": "Existed_Raid", 00:12:51.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.121 "strip_size_kb": 64, 00:12:51.121 "state": "configuring", 00:12:51.121 
"raid_level": "concat", 00:12:51.121 "superblock": false, 00:12:51.121 "num_base_bdevs": 4, 00:12:51.121 "num_base_bdevs_discovered": 3, 00:12:51.121 "num_base_bdevs_operational": 4, 00:12:51.121 "base_bdevs_list": [ 00:12:51.121 { 00:12:51.121 "name": "BaseBdev1", 00:12:51.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.121 "is_configured": false, 00:12:51.121 "data_offset": 0, 00:12:51.121 "data_size": 0 00:12:51.121 }, 00:12:51.121 { 00:12:51.121 "name": "BaseBdev2", 00:12:51.121 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:51.121 "is_configured": true, 00:12:51.121 "data_offset": 0, 00:12:51.121 "data_size": 65536 00:12:51.121 }, 00:12:51.121 { 00:12:51.121 "name": "BaseBdev3", 00:12:51.121 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:51.121 "is_configured": true, 00:12:51.121 "data_offset": 0, 00:12:51.121 "data_size": 65536 00:12:51.121 }, 00:12:51.121 { 00:12:51.121 "name": "BaseBdev4", 00:12:51.121 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:51.121 "is_configured": true, 00:12:51.121 "data_offset": 0, 00:12:51.121 "data_size": 65536 00:12:51.121 } 00:12:51.121 ] 00:12:51.121 }' 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.121 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.380 [2024-11-05 16:27:04.457361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.380 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.639 "name": "Existed_Raid", 00:12:51.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.639 "strip_size_kb": 64, 00:12:51.639 "state": "configuring", 00:12:51.639 "raid_level": "concat", 00:12:51.639 "superblock": false, 
00:12:51.639 "num_base_bdevs": 4, 00:12:51.639 "num_base_bdevs_discovered": 2, 00:12:51.639 "num_base_bdevs_operational": 4, 00:12:51.639 "base_bdevs_list": [ 00:12:51.639 { 00:12:51.639 "name": "BaseBdev1", 00:12:51.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.639 "is_configured": false, 00:12:51.639 "data_offset": 0, 00:12:51.639 "data_size": 0 00:12:51.639 }, 00:12:51.639 { 00:12:51.639 "name": null, 00:12:51.639 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:51.639 "is_configured": false, 00:12:51.639 "data_offset": 0, 00:12:51.639 "data_size": 65536 00:12:51.639 }, 00:12:51.639 { 00:12:51.639 "name": "BaseBdev3", 00:12:51.639 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:51.639 "is_configured": true, 00:12:51.639 "data_offset": 0, 00:12:51.639 "data_size": 65536 00:12:51.639 }, 00:12:51.639 { 00:12:51.639 "name": "BaseBdev4", 00:12:51.639 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:51.639 "is_configured": true, 00:12:51.639 "data_offset": 0, 00:12:51.639 "data_size": 65536 00:12:51.639 } 00:12:51.639 ] 00:12:51.639 }' 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.639 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.899 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.899 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.899 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.158 16:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:52.158 16:27:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:52.158 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.158 16:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.158 [2024-11-05 16:27:05.037147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.158 BaseBdev1 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.158 [ 00:12:52.158 { 00:12:52.158 "name": "BaseBdev1", 00:12:52.158 "aliases": [ 00:12:52.158 "15e89a21-5b46-4b9e-b16e-0b746d4bbf22" 00:12:52.158 ], 00:12:52.158 "product_name": "Malloc disk", 00:12:52.158 "block_size": 512, 00:12:52.158 "num_blocks": 65536, 00:12:52.158 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:52.158 "assigned_rate_limits": { 00:12:52.158 "rw_ios_per_sec": 0, 00:12:52.158 "rw_mbytes_per_sec": 0, 00:12:52.158 "r_mbytes_per_sec": 0, 00:12:52.158 "w_mbytes_per_sec": 0 00:12:52.158 }, 00:12:52.158 "claimed": true, 00:12:52.158 "claim_type": "exclusive_write", 00:12:52.158 "zoned": false, 00:12:52.158 "supported_io_types": { 00:12:52.158 "read": true, 00:12:52.158 "write": true, 00:12:52.158 "unmap": true, 00:12:52.158 "flush": true, 00:12:52.158 "reset": true, 00:12:52.158 "nvme_admin": false, 00:12:52.158 "nvme_io": false, 00:12:52.158 "nvme_io_md": false, 00:12:52.158 "write_zeroes": true, 00:12:52.158 "zcopy": true, 00:12:52.158 "get_zone_info": false, 00:12:52.158 "zone_management": false, 00:12:52.158 "zone_append": false, 00:12:52.158 "compare": false, 00:12:52.158 "compare_and_write": false, 00:12:52.158 "abort": true, 00:12:52.158 "seek_hole": false, 00:12:52.158 "seek_data": false, 00:12:52.158 "copy": true, 00:12:52.158 "nvme_iov_md": false 00:12:52.158 }, 00:12:52.158 "memory_domains": [ 00:12:52.158 { 00:12:52.158 "dma_device_id": "system", 00:12:52.158 "dma_device_type": 1 00:12:52.158 }, 00:12:52.158 { 00:12:52.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.158 "dma_device_type": 2 00:12:52.158 } 00:12:52.158 ], 00:12:52.158 "driver_specific": {} 00:12:52.158 } 00:12:52.158 ] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.158 "name": "Existed_Raid", 00:12:52.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.158 "strip_size_kb": 64, 00:12:52.158 "state": "configuring", 00:12:52.158 "raid_level": "concat", 00:12:52.158 "superblock": false, 
00:12:52.158 "num_base_bdevs": 4, 00:12:52.158 "num_base_bdevs_discovered": 3, 00:12:52.158 "num_base_bdevs_operational": 4, 00:12:52.158 "base_bdevs_list": [ 00:12:52.158 { 00:12:52.158 "name": "BaseBdev1", 00:12:52.158 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:52.158 "is_configured": true, 00:12:52.158 "data_offset": 0, 00:12:52.158 "data_size": 65536 00:12:52.158 }, 00:12:52.158 { 00:12:52.158 "name": null, 00:12:52.158 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:52.158 "is_configured": false, 00:12:52.158 "data_offset": 0, 00:12:52.158 "data_size": 65536 00:12:52.158 }, 00:12:52.158 { 00:12:52.158 "name": "BaseBdev3", 00:12:52.158 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:52.158 "is_configured": true, 00:12:52.158 "data_offset": 0, 00:12:52.158 "data_size": 65536 00:12:52.158 }, 00:12:52.158 { 00:12:52.158 "name": "BaseBdev4", 00:12:52.158 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:52.158 "is_configured": true, 00:12:52.158 "data_offset": 0, 00:12:52.158 "data_size": 65536 00:12:52.158 } 00:12:52.158 ] 00:12:52.158 }' 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.158 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.725 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.725 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.725 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.725 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:52.726 16:27:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.726 [2024-11-05 16:27:05.596400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.726 16:27:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.726 "name": "Existed_Raid", 00:12:52.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.726 "strip_size_kb": 64, 00:12:52.726 "state": "configuring", 00:12:52.726 "raid_level": "concat", 00:12:52.726 "superblock": false, 00:12:52.726 "num_base_bdevs": 4, 00:12:52.726 "num_base_bdevs_discovered": 2, 00:12:52.726 "num_base_bdevs_operational": 4, 00:12:52.726 "base_bdevs_list": [ 00:12:52.726 { 00:12:52.726 "name": "BaseBdev1", 00:12:52.726 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:52.726 "is_configured": true, 00:12:52.726 "data_offset": 0, 00:12:52.726 "data_size": 65536 00:12:52.726 }, 00:12:52.726 { 00:12:52.726 "name": null, 00:12:52.726 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:52.726 "is_configured": false, 00:12:52.726 "data_offset": 0, 00:12:52.726 "data_size": 65536 00:12:52.726 }, 00:12:52.726 { 00:12:52.726 "name": null, 00:12:52.726 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:52.726 "is_configured": false, 00:12:52.726 "data_offset": 0, 00:12:52.726 "data_size": 65536 00:12:52.726 }, 00:12:52.726 { 00:12:52.726 "name": "BaseBdev4", 00:12:52.726 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:52.726 "is_configured": true, 00:12:52.726 "data_offset": 0, 00:12:52.726 "data_size": 65536 00:12:52.726 } 00:12:52.726 ] 00:12:52.726 }' 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.726 16:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.984 16:27:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.984 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.984 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.984 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.984 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.244 [2024-11-05 16:27:06.087500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.244 "name": "Existed_Raid", 00:12:53.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.244 "strip_size_kb": 64, 00:12:53.244 "state": "configuring", 00:12:53.244 "raid_level": "concat", 00:12:53.244 "superblock": false, 00:12:53.244 "num_base_bdevs": 4, 00:12:53.244 "num_base_bdevs_discovered": 3, 00:12:53.244 "num_base_bdevs_operational": 4, 00:12:53.244 "base_bdevs_list": [ 00:12:53.244 { 00:12:53.244 "name": "BaseBdev1", 00:12:53.244 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:53.244 "is_configured": true, 00:12:53.244 "data_offset": 0, 00:12:53.244 "data_size": 65536 00:12:53.244 }, 00:12:53.244 { 00:12:53.244 "name": null, 00:12:53.244 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:53.244 "is_configured": false, 00:12:53.244 "data_offset": 0, 00:12:53.244 "data_size": 65536 00:12:53.244 }, 00:12:53.244 { 00:12:53.244 "name": "BaseBdev3", 00:12:53.244 "uuid": 
"96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:53.244 "is_configured": true, 00:12:53.244 "data_offset": 0, 00:12:53.244 "data_size": 65536 00:12:53.244 }, 00:12:53.244 { 00:12:53.244 "name": "BaseBdev4", 00:12:53.244 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:53.244 "is_configured": true, 00:12:53.244 "data_offset": 0, 00:12:53.244 "data_size": 65536 00:12:53.244 } 00:12:53.244 ] 00:12:53.244 }' 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.244 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.501 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.759 [2024-11-05 16:27:06.594672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.759 "name": "Existed_Raid", 00:12:53.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.759 "strip_size_kb": 64, 00:12:53.759 "state": "configuring", 00:12:53.759 "raid_level": "concat", 00:12:53.759 "superblock": false, 00:12:53.759 "num_base_bdevs": 4, 00:12:53.759 
"num_base_bdevs_discovered": 2, 00:12:53.759 "num_base_bdevs_operational": 4, 00:12:53.759 "base_bdevs_list": [ 00:12:53.759 { 00:12:53.759 "name": null, 00:12:53.759 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:53.759 "is_configured": false, 00:12:53.759 "data_offset": 0, 00:12:53.759 "data_size": 65536 00:12:53.759 }, 00:12:53.759 { 00:12:53.759 "name": null, 00:12:53.759 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:53.759 "is_configured": false, 00:12:53.759 "data_offset": 0, 00:12:53.759 "data_size": 65536 00:12:53.759 }, 00:12:53.759 { 00:12:53.759 "name": "BaseBdev3", 00:12:53.759 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:53.759 "is_configured": true, 00:12:53.759 "data_offset": 0, 00:12:53.759 "data_size": 65536 00:12:53.759 }, 00:12:53.759 { 00:12:53.759 "name": "BaseBdev4", 00:12:53.759 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:53.759 "is_configured": true, 00:12:53.759 "data_offset": 0, 00:12:53.759 "data_size": 65536 00:12:53.759 } 00:12:53.759 ] 00:12:53.759 }' 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.759 16:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.324 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.325 [2024-11-05 16:27:07.241891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.325 "name": "Existed_Raid", 00:12:54.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.325 "strip_size_kb": 64, 00:12:54.325 "state": "configuring", 00:12:54.325 "raid_level": "concat", 00:12:54.325 "superblock": false, 00:12:54.325 "num_base_bdevs": 4, 00:12:54.325 "num_base_bdevs_discovered": 3, 00:12:54.325 "num_base_bdevs_operational": 4, 00:12:54.325 "base_bdevs_list": [ 00:12:54.325 { 00:12:54.325 "name": null, 00:12:54.325 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:54.325 "is_configured": false, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 }, 00:12:54.325 { 00:12:54.325 "name": "BaseBdev2", 00:12:54.325 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:54.325 "is_configured": true, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 }, 00:12:54.325 { 00:12:54.325 "name": "BaseBdev3", 00:12:54.325 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:54.325 "is_configured": true, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 }, 00:12:54.325 { 00:12:54.325 "name": "BaseBdev4", 00:12:54.325 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:54.325 "is_configured": true, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 } 00:12:54.325 ] 00:12:54.325 }' 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.325 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15e89a21-5b46-4b9e-b16e-0b746d4bbf22 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 [2024-11-05 16:27:07.790555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:54.891 [2024-11-05 16:27:07.790697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:54.891 [2024-11-05 16:27:07.790723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:54.891 [2024-11-05 16:27:07.791015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:54.891 [2024-11-05 16:27:07.791209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:54.891 [2024-11-05 16:27:07.791256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:54.891 [2024-11-05 16:27:07.791563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.891 NewBaseBdev 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.891 [ 00:12:54.891 { 00:12:54.891 "name": "NewBaseBdev", 00:12:54.891 "aliases": [ 00:12:54.891 "15e89a21-5b46-4b9e-b16e-0b746d4bbf22" 00:12:54.891 ], 00:12:54.891 "product_name": "Malloc disk", 00:12:54.891 "block_size": 512, 00:12:54.891 "num_blocks": 65536, 00:12:54.891 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:54.891 "assigned_rate_limits": { 00:12:54.891 "rw_ios_per_sec": 0, 00:12:54.891 "rw_mbytes_per_sec": 0, 00:12:54.891 "r_mbytes_per_sec": 0, 00:12:54.891 "w_mbytes_per_sec": 0 00:12:54.891 }, 00:12:54.891 "claimed": true, 00:12:54.891 "claim_type": "exclusive_write", 00:12:54.891 "zoned": false, 00:12:54.891 "supported_io_types": { 00:12:54.891 "read": true, 00:12:54.891 "write": true, 00:12:54.891 "unmap": true, 00:12:54.891 "flush": true, 00:12:54.891 "reset": true, 00:12:54.891 "nvme_admin": false, 00:12:54.891 "nvme_io": false, 00:12:54.891 "nvme_io_md": false, 00:12:54.891 "write_zeroes": true, 00:12:54.891 "zcopy": true, 00:12:54.891 "get_zone_info": false, 00:12:54.891 "zone_management": false, 00:12:54.891 "zone_append": false, 00:12:54.891 "compare": false, 00:12:54.891 "compare_and_write": false, 00:12:54.891 "abort": true, 00:12:54.891 "seek_hole": false, 00:12:54.891 "seek_data": false, 00:12:54.891 "copy": true, 00:12:54.891 "nvme_iov_md": false 00:12:54.891 }, 00:12:54.891 "memory_domains": [ 00:12:54.891 { 00:12:54.891 "dma_device_id": "system", 00:12:54.891 "dma_device_type": 1 00:12:54.891 }, 00:12:54.891 { 00:12:54.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.891 "dma_device_type": 2 00:12:54.891 } 00:12:54.891 ], 00:12:54.891 "driver_specific": {} 00:12:54.891 } 00:12:54.891 ] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.891 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.891 "name": "Existed_Raid", 00:12:54.891 "uuid": "a24a2208-f361-49bc-b05c-2979ba761189", 00:12:54.891 "strip_size_kb": 64, 00:12:54.891 "state": "online", 00:12:54.891 "raid_level": "concat", 00:12:54.891 "superblock": false, 00:12:54.891 
"num_base_bdevs": 4, 00:12:54.891 "num_base_bdevs_discovered": 4, 00:12:54.891 "num_base_bdevs_operational": 4, 00:12:54.891 "base_bdevs_list": [ 00:12:54.891 { 00:12:54.891 "name": "NewBaseBdev", 00:12:54.891 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:54.891 "is_configured": true, 00:12:54.891 "data_offset": 0, 00:12:54.892 "data_size": 65536 00:12:54.892 }, 00:12:54.892 { 00:12:54.892 "name": "BaseBdev2", 00:12:54.892 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:54.892 "is_configured": true, 00:12:54.892 "data_offset": 0, 00:12:54.892 "data_size": 65536 00:12:54.892 }, 00:12:54.892 { 00:12:54.892 "name": "BaseBdev3", 00:12:54.892 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:54.892 "is_configured": true, 00:12:54.892 "data_offset": 0, 00:12:54.892 "data_size": 65536 00:12:54.892 }, 00:12:54.892 { 00:12:54.892 "name": "BaseBdev4", 00:12:54.892 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:54.892 "is_configured": true, 00:12:54.892 "data_offset": 0, 00:12:54.892 "data_size": 65536 00:12:54.892 } 00:12:54.892 ] 00:12:54.892 }' 00:12:54.892 16:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.892 16:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.459 16:27:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.459 [2024-11-05 16:27:08.314151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.459 "name": "Existed_Raid", 00:12:55.459 "aliases": [ 00:12:55.459 "a24a2208-f361-49bc-b05c-2979ba761189" 00:12:55.459 ], 00:12:55.459 "product_name": "Raid Volume", 00:12:55.459 "block_size": 512, 00:12:55.459 "num_blocks": 262144, 00:12:55.459 "uuid": "a24a2208-f361-49bc-b05c-2979ba761189", 00:12:55.459 "assigned_rate_limits": { 00:12:55.459 "rw_ios_per_sec": 0, 00:12:55.459 "rw_mbytes_per_sec": 0, 00:12:55.459 "r_mbytes_per_sec": 0, 00:12:55.459 "w_mbytes_per_sec": 0 00:12:55.459 }, 00:12:55.459 "claimed": false, 00:12:55.459 "zoned": false, 00:12:55.459 "supported_io_types": { 00:12:55.459 "read": true, 00:12:55.459 "write": true, 00:12:55.459 "unmap": true, 00:12:55.459 "flush": true, 00:12:55.459 "reset": true, 00:12:55.459 "nvme_admin": false, 00:12:55.459 "nvme_io": false, 00:12:55.459 "nvme_io_md": false, 00:12:55.459 "write_zeroes": true, 00:12:55.459 "zcopy": false, 00:12:55.459 "get_zone_info": false, 00:12:55.459 "zone_management": false, 00:12:55.459 "zone_append": false, 00:12:55.459 "compare": false, 00:12:55.459 "compare_and_write": false, 00:12:55.459 "abort": false, 00:12:55.459 "seek_hole": false, 00:12:55.459 "seek_data": false, 00:12:55.459 "copy": false, 00:12:55.459 "nvme_iov_md": false 00:12:55.459 }, 
00:12:55.459 "memory_domains": [ 00:12:55.459 { 00:12:55.459 "dma_device_id": "system", 00:12:55.459 "dma_device_type": 1 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.459 "dma_device_type": 2 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "system", 00:12:55.459 "dma_device_type": 1 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.459 "dma_device_type": 2 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "system", 00:12:55.459 "dma_device_type": 1 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.459 "dma_device_type": 2 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "system", 00:12:55.459 "dma_device_type": 1 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.459 "dma_device_type": 2 00:12:55.459 } 00:12:55.459 ], 00:12:55.459 "driver_specific": { 00:12:55.459 "raid": { 00:12:55.459 "uuid": "a24a2208-f361-49bc-b05c-2979ba761189", 00:12:55.459 "strip_size_kb": 64, 00:12:55.459 "state": "online", 00:12:55.459 "raid_level": "concat", 00:12:55.459 "superblock": false, 00:12:55.459 "num_base_bdevs": 4, 00:12:55.459 "num_base_bdevs_discovered": 4, 00:12:55.459 "num_base_bdevs_operational": 4, 00:12:55.459 "base_bdevs_list": [ 00:12:55.459 { 00:12:55.459 "name": "NewBaseBdev", 00:12:55.459 "uuid": "15e89a21-5b46-4b9e-b16e-0b746d4bbf22", 00:12:55.459 "is_configured": true, 00:12:55.459 "data_offset": 0, 00:12:55.459 "data_size": 65536 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "name": "BaseBdev2", 00:12:55.459 "uuid": "2bd68517-61b1-466c-b326-2526139295a6", 00:12:55.459 "is_configured": true, 00:12:55.459 "data_offset": 0, 00:12:55.459 "data_size": 65536 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "name": "BaseBdev3", 00:12:55.459 "uuid": "96b48ed8-7c08-4595-a8b5-409f17d7ecff", 00:12:55.459 "is_configured": true, 00:12:55.459 "data_offset": 0, 
00:12:55.459 "data_size": 65536 00:12:55.459 }, 00:12:55.459 { 00:12:55.459 "name": "BaseBdev4", 00:12:55.459 "uuid": "84993ca3-1ede-432f-82f3-bbcc68866b88", 00:12:55.459 "is_configured": true, 00:12:55.459 "data_offset": 0, 00:12:55.459 "data_size": 65536 00:12:55.459 } 00:12:55.459 ] 00:12:55.459 } 00:12:55.459 } 00:12:55.459 }' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:55.459 BaseBdev2 00:12:55.459 BaseBdev3 00:12:55.459 BaseBdev4' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.459 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.718 [2024-11-05 16:27:08.665163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.718 [2024-11-05 16:27:08.665254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.718 [2024-11-05 16:27:08.665380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.718 [2024-11-05 16:27:08.665459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.718 [2024-11-05 16:27:08.665471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71586 00:12:55.718 16:27:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71586 ']' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71586 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71586 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71586' 00:12:55.718 killing process with pid 71586 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71586 00:12:55.718 [2024-11-05 16:27:08.708326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.718 16:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71586 00:12:56.286 [2024-11-05 16:27:09.155276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.661 16:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:57.661 00:12:57.661 real 0m12.183s 00:12:57.661 user 0m19.317s 00:12:57.661 sys 0m2.141s 00:12:57.661 16:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.661 16:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.661 ************************************ 00:12:57.661 END TEST raid_state_function_test 00:12:57.662 ************************************ 00:12:57.662 16:27:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:57.662 16:27:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:57.662 16:27:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:57.662 16:27:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.662 ************************************ 00:12:57.662 START TEST raid_state_function_test_sb 00:12:57.662 ************************************ 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72263 00:12:57.662 16:27:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72263' 00:12:57.662 Process raid pid: 72263 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72263 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72263 ']' 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.662 16:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.662 [2024-11-05 16:27:10.511985] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
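Just before `bdev_svc` is launched, the trace derives the create-time arguments: any level other than `raid1` gets a 64 KiB strip size (`strip_size_create_arg='-z 64'`), and `superblock=true` selects `-s`. A hedged sketch of that argument selection (variable names follow the trace; this is not the verbatim script):

```shell
#!/usr/bin/env bash
# Derive bdev_raid_create arguments as in the trace:
# non-raid1 levels take a strip size, superblock mode adds -s.
raid_level=concat
superblock=true

strip_size_create_arg=""
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

superblock_create_arg=""
if [ "$superblock" = true ]; then
    superblock_create_arg="-s"
fi

echo "create args: $strip_size_create_arg $superblock_create_arg"
```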
00:12:57.662 [2024-11-05 16:27:10.512200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.662 [2024-11-05 16:27:10.691650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.922 [2024-11-05 16:27:10.815442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.180 [2024-11-05 16:27:11.014471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.180 [2024-11-05 16:27:11.014621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.440 [2024-11-05 16:27:11.395158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.440 [2024-11-05 16:27:11.395216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.440 [2024-11-05 16:27:11.395227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.440 [2024-11-05 16:27:11.395237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.440 [2024-11-05 16:27:11.395244] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:58.440 [2024-11-05 16:27:11.395253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.440 [2024-11-05 16:27:11.395259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.440 [2024-11-05 16:27:11.395268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.440 
16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.440 "name": "Existed_Raid", 00:12:58.440 "uuid": "1decef23-b6ac-4471-8499-80adc78f0004", 00:12:58.440 "strip_size_kb": 64, 00:12:58.440 "state": "configuring", 00:12:58.440 "raid_level": "concat", 00:12:58.440 "superblock": true, 00:12:58.440 "num_base_bdevs": 4, 00:12:58.440 "num_base_bdevs_discovered": 0, 00:12:58.440 "num_base_bdevs_operational": 4, 00:12:58.440 "base_bdevs_list": [ 00:12:58.440 { 00:12:58.440 "name": "BaseBdev1", 00:12:58.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.440 "is_configured": false, 00:12:58.440 "data_offset": 0, 00:12:58.440 "data_size": 0 00:12:58.440 }, 00:12:58.440 { 00:12:58.440 "name": "BaseBdev2", 00:12:58.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.440 "is_configured": false, 00:12:58.440 "data_offset": 0, 00:12:58.440 "data_size": 0 00:12:58.440 }, 00:12:58.440 { 00:12:58.440 "name": "BaseBdev3", 00:12:58.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.440 "is_configured": false, 00:12:58.440 "data_offset": 0, 00:12:58.440 "data_size": 0 00:12:58.440 }, 00:12:58.440 { 00:12:58.440 "name": "BaseBdev4", 00:12:58.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.440 "is_configured": false, 00:12:58.440 "data_offset": 0, 00:12:58.440 "data_size": 0 00:12:58.440 } 00:12:58.440 ] 00:12:58.440 }' 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.440 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 16:27:11 
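The `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` step above captures the raid bdev's JSON into `raid_bdev_info`, from which `verify_raid_bdev_state` reads fields such as `state` and `raid_level`. As an illustration only, one scalar field can be pulled out of such a blob with sed instead of jq (`json_field` is a hypothetical helper; the real test relies on jq):

```shell
#!/usr/bin/env bash
# Extract a scalar string field from a raid_bdev_info-style JSON blob.
# Illustrative sed parsing; the actual test pipes through jq.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "concat", "num_base_bdevs_discovered": 0 }'

json_field() {  # json_field <blob> <key> -> quoted string value of <key>
    sed -n "s/.*\"$2\": *\"\([^\"]*\)\".*/\1/p" <<<"$1"
}

state=$(json_field "$raid_bdev_info" state)
raid_level=$(json_field "$raid_bdev_info" raid_level)
echo "state=$state raid_level=$raid_level"   # state=configuring raid_level=concat
```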
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 [2024-11-05 16:27:11.882291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.009 [2024-11-05 16:27:11.882412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 [2024-11-05 16:27:11.890264] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.009 [2024-11-05 16:27:11.890349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.009 [2024-11-05 16:27:11.890383] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.009 [2024-11-05 16:27:11.890407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.009 [2024-11-05 16:27:11.890435] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.009 [2024-11-05 16:27:11.890458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.009 [2024-11-05 16:27:11.890476] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:59.009 [2024-11-05 16:27:11.890501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 [2024-11-05 16:27:11.935210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.009 BaseBdev1 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.009 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.009 [ 00:12:59.009 { 00:12:59.009 "name": "BaseBdev1", 00:12:59.009 "aliases": [ 00:12:59.009 "b0445f8b-a891-4cc6-900c-326532201318" 00:12:59.009 ], 00:12:59.009 "product_name": "Malloc disk", 00:12:59.009 "block_size": 512, 00:12:59.009 "num_blocks": 65536, 00:12:59.010 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:12:59.010 "assigned_rate_limits": { 00:12:59.010 "rw_ios_per_sec": 0, 00:12:59.010 "rw_mbytes_per_sec": 0, 00:12:59.010 "r_mbytes_per_sec": 0, 00:12:59.010 "w_mbytes_per_sec": 0 00:12:59.010 }, 00:12:59.010 "claimed": true, 00:12:59.010 "claim_type": "exclusive_write", 00:12:59.010 "zoned": false, 00:12:59.010 "supported_io_types": { 00:12:59.010 "read": true, 00:12:59.010 "write": true, 00:12:59.010 "unmap": true, 00:12:59.010 "flush": true, 00:12:59.010 "reset": true, 00:12:59.010 "nvme_admin": false, 00:12:59.010 "nvme_io": false, 00:12:59.010 "nvme_io_md": false, 00:12:59.010 "write_zeroes": true, 00:12:59.010 "zcopy": true, 00:12:59.010 "get_zone_info": false, 00:12:59.010 "zone_management": false, 00:12:59.010 "zone_append": false, 00:12:59.010 "compare": false, 00:12:59.010 "compare_and_write": false, 00:12:59.010 "abort": true, 00:12:59.010 "seek_hole": false, 00:12:59.010 "seek_data": false, 00:12:59.010 "copy": true, 00:12:59.010 "nvme_iov_md": false 00:12:59.010 }, 00:12:59.010 "memory_domains": [ 00:12:59.010 { 00:12:59.010 "dma_device_id": "system", 00:12:59.010 "dma_device_type": 1 00:12:59.010 }, 00:12:59.010 { 00:12:59.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.010 "dma_device_type": 2 00:12:59.010 } 
00:12:59.010 ], 00:12:59.010 "driver_specific": {} 00:12:59.010 } 00:12:59.010 ] 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.010 16:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.010 16:27:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.010 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.010 "name": "Existed_Raid", 00:12:59.010 "uuid": "0dd2c6e8-95e8-4fbf-8fbf-acf3c0f92e1d", 00:12:59.010 "strip_size_kb": 64, 00:12:59.010 "state": "configuring", 00:12:59.010 "raid_level": "concat", 00:12:59.010 "superblock": true, 00:12:59.010 "num_base_bdevs": 4, 00:12:59.010 "num_base_bdevs_discovered": 1, 00:12:59.010 "num_base_bdevs_operational": 4, 00:12:59.010 "base_bdevs_list": [ 00:12:59.010 { 00:12:59.010 "name": "BaseBdev1", 00:12:59.010 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:12:59.010 "is_configured": true, 00:12:59.010 "data_offset": 2048, 00:12:59.010 "data_size": 63488 00:12:59.010 }, 00:12:59.010 { 00:12:59.010 "name": "BaseBdev2", 00:12:59.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.010 "is_configured": false, 00:12:59.010 "data_offset": 0, 00:12:59.010 "data_size": 0 00:12:59.010 }, 00:12:59.010 { 00:12:59.010 "name": "BaseBdev3", 00:12:59.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.010 "is_configured": false, 00:12:59.010 "data_offset": 0, 00:12:59.010 "data_size": 0 00:12:59.010 }, 00:12:59.010 { 00:12:59.010 "name": "BaseBdev4", 00:12:59.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.010 "is_configured": false, 00:12:59.010 "data_offset": 0, 00:12:59.010 "data_size": 0 00:12:59.010 } 00:12:59.010 ] 00:12:59.010 }' 00:12:59.010 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.010 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.578 16:27:12 
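The `waitforbdev BaseBdev1` call traced above polls `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears. A generic poll-until-ready helper in that spirit can be sketched as follows (`waitfor` is a hypothetical function, not the real `autotest_common.sh` implementation, which also manages RPC retries and timeouts):

```shell
#!/usr/bin/env bash
# Hypothetical poll-until-ready helper in the spirit of waitforbdev:
# retry a probe command until it succeeds or max_retries is exhausted.
waitfor() {  # waitfor <max_retries> <cmd...> -> 0 on success, 1 on timeout
    local max_retries=$1; shift
    local i
    for (( i = 0; i < max_retries; i++ )); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}

tmpfile=$(mktemp)
waitfor 5 test -e "$tmpfile" && echo "bdev ready"
rm -f "$tmpfile"
```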
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 [2024-11-05 16:27:12.430460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.578 [2024-11-05 16:27:12.430614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.578 [2024-11-05 16:27:12.442530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.578 [2024-11-05 16:27:12.444751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.578 [2024-11-05 16:27:12.444853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.578 [2024-11-05 16:27:12.444898] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.578 [2024-11-05 16:27:12.444931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.578 [2024-11-05 16:27:12.444966] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:59.578 [2024-11-05 16:27:12.444993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.578 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:59.579 "name": "Existed_Raid", 00:12:59.579 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:12:59.579 "strip_size_kb": 64, 00:12:59.579 "state": "configuring", 00:12:59.579 "raid_level": "concat", 00:12:59.579 "superblock": true, 00:12:59.579 "num_base_bdevs": 4, 00:12:59.579 "num_base_bdevs_discovered": 1, 00:12:59.579 "num_base_bdevs_operational": 4, 00:12:59.579 "base_bdevs_list": [ 00:12:59.579 { 00:12:59.579 "name": "BaseBdev1", 00:12:59.579 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:12:59.579 "is_configured": true, 00:12:59.579 "data_offset": 2048, 00:12:59.579 "data_size": 63488 00:12:59.579 }, 00:12:59.579 { 00:12:59.579 "name": "BaseBdev2", 00:12:59.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.579 "is_configured": false, 00:12:59.579 "data_offset": 0, 00:12:59.579 "data_size": 0 00:12:59.579 }, 00:12:59.579 { 00:12:59.579 "name": "BaseBdev3", 00:12:59.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.579 "is_configured": false, 00:12:59.579 "data_offset": 0, 00:12:59.579 "data_size": 0 00:12:59.579 }, 00:12:59.579 { 00:12:59.579 "name": "BaseBdev4", 00:12:59.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.579 "is_configured": false, 00:12:59.579 "data_offset": 0, 00:12:59.579 "data_size": 0 00:12:59.579 } 00:12:59.579 ] 00:12:59.579 }' 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.579 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.838 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.838 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.838 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.098 [2024-11-05 16:27:12.948075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:00.098 BaseBdev2 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.098 [ 00:13:00.098 { 00:13:00.098 "name": "BaseBdev2", 00:13:00.098 "aliases": [ 00:13:00.098 "be5e8907-2c9e-4c56-92b9-fa16aa05145c" 00:13:00.098 ], 00:13:00.098 "product_name": "Malloc disk", 00:13:00.098 "block_size": 512, 00:13:00.098 "num_blocks": 65536, 00:13:00.098 "uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 
00:13:00.098 "assigned_rate_limits": { 00:13:00.098 "rw_ios_per_sec": 0, 00:13:00.098 "rw_mbytes_per_sec": 0, 00:13:00.098 "r_mbytes_per_sec": 0, 00:13:00.098 "w_mbytes_per_sec": 0 00:13:00.098 }, 00:13:00.098 "claimed": true, 00:13:00.098 "claim_type": "exclusive_write", 00:13:00.098 "zoned": false, 00:13:00.098 "supported_io_types": { 00:13:00.098 "read": true, 00:13:00.098 "write": true, 00:13:00.098 "unmap": true, 00:13:00.098 "flush": true, 00:13:00.098 "reset": true, 00:13:00.098 "nvme_admin": false, 00:13:00.098 "nvme_io": false, 00:13:00.098 "nvme_io_md": false, 00:13:00.098 "write_zeroes": true, 00:13:00.098 "zcopy": true, 00:13:00.098 "get_zone_info": false, 00:13:00.098 "zone_management": false, 00:13:00.098 "zone_append": false, 00:13:00.098 "compare": false, 00:13:00.098 "compare_and_write": false, 00:13:00.098 "abort": true, 00:13:00.098 "seek_hole": false, 00:13:00.098 "seek_data": false, 00:13:00.098 "copy": true, 00:13:00.098 "nvme_iov_md": false 00:13:00.098 }, 00:13:00.098 "memory_domains": [ 00:13:00.098 { 00:13:00.098 "dma_device_id": "system", 00:13:00.098 "dma_device_type": 1 00:13:00.098 }, 00:13:00.098 { 00:13:00.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.098 "dma_device_type": 2 00:13:00.098 } 00:13:00.098 ], 00:13:00.098 "driver_specific": {} 00:13:00.098 } 00:13:00.098 ] 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.098 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.099 16:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.099 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.099 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.099 "name": "Existed_Raid", 00:13:00.099 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:00.099 "strip_size_kb": 64, 00:13:00.099 "state": "configuring", 00:13:00.099 "raid_level": "concat", 00:13:00.099 "superblock": true, 00:13:00.099 "num_base_bdevs": 4, 00:13:00.099 "num_base_bdevs_discovered": 2, 00:13:00.099 
"num_base_bdevs_operational": 4, 00:13:00.099 "base_bdevs_list": [ 00:13:00.099 { 00:13:00.099 "name": "BaseBdev1", 00:13:00.099 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:13:00.099 "is_configured": true, 00:13:00.099 "data_offset": 2048, 00:13:00.099 "data_size": 63488 00:13:00.099 }, 00:13:00.099 { 00:13:00.099 "name": "BaseBdev2", 00:13:00.099 "uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 00:13:00.099 "is_configured": true, 00:13:00.099 "data_offset": 2048, 00:13:00.099 "data_size": 63488 00:13:00.099 }, 00:13:00.099 { 00:13:00.099 "name": "BaseBdev3", 00:13:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.099 "is_configured": false, 00:13:00.099 "data_offset": 0, 00:13:00.099 "data_size": 0 00:13:00.099 }, 00:13:00.099 { 00:13:00.099 "name": "BaseBdev4", 00:13:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.099 "is_configured": false, 00:13:00.099 "data_offset": 0, 00:13:00.099 "data_size": 0 00:13:00.099 } 00:13:00.099 ] 00:13:00.099 }' 00:13:00.099 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.099 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:00.358 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.358 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.617 [2024-11-05 16:27:13.482296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.617 BaseBdev3 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:00.617 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 [ 00:13:00.618 { 00:13:00.618 "name": "BaseBdev3", 00:13:00.618 "aliases": [ 00:13:00.618 "cd80cae8-3483-45e1-b9a9-09cadd837ec8" 00:13:00.618 ], 00:13:00.618 "product_name": "Malloc disk", 00:13:00.618 "block_size": 512, 00:13:00.618 "num_blocks": 65536, 00:13:00.618 "uuid": "cd80cae8-3483-45e1-b9a9-09cadd837ec8", 00:13:00.618 "assigned_rate_limits": { 00:13:00.618 "rw_ios_per_sec": 0, 00:13:00.618 "rw_mbytes_per_sec": 0, 00:13:00.618 "r_mbytes_per_sec": 0, 00:13:00.618 "w_mbytes_per_sec": 0 00:13:00.618 }, 00:13:00.618 "claimed": true, 00:13:00.618 "claim_type": "exclusive_write", 00:13:00.618 "zoned": false, 00:13:00.618 "supported_io_types": { 
00:13:00.618 "read": true, 00:13:00.618 "write": true, 00:13:00.618 "unmap": true, 00:13:00.618 "flush": true, 00:13:00.618 "reset": true, 00:13:00.618 "nvme_admin": false, 00:13:00.618 "nvme_io": false, 00:13:00.618 "nvme_io_md": false, 00:13:00.618 "write_zeroes": true, 00:13:00.618 "zcopy": true, 00:13:00.618 "get_zone_info": false, 00:13:00.618 "zone_management": false, 00:13:00.618 "zone_append": false, 00:13:00.618 "compare": false, 00:13:00.618 "compare_and_write": false, 00:13:00.618 "abort": true, 00:13:00.618 "seek_hole": false, 00:13:00.618 "seek_data": false, 00:13:00.618 "copy": true, 00:13:00.618 "nvme_iov_md": false 00:13:00.618 }, 00:13:00.618 "memory_domains": [ 00:13:00.618 { 00:13:00.618 "dma_device_id": "system", 00:13:00.618 "dma_device_type": 1 00:13:00.618 }, 00:13:00.618 { 00:13:00.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.618 "dma_device_type": 2 00:13:00.618 } 00:13:00.618 ], 00:13:00.618 "driver_specific": {} 00:13:00.618 } 00:13:00.618 ] 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.618 "name": "Existed_Raid", 00:13:00.618 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:00.618 "strip_size_kb": 64, 00:13:00.618 "state": "configuring", 00:13:00.618 "raid_level": "concat", 00:13:00.618 "superblock": true, 00:13:00.618 "num_base_bdevs": 4, 00:13:00.618 "num_base_bdevs_discovered": 3, 00:13:00.618 "num_base_bdevs_operational": 4, 00:13:00.618 "base_bdevs_list": [ 00:13:00.618 { 00:13:00.618 "name": "BaseBdev1", 00:13:00.618 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:13:00.618 "is_configured": true, 00:13:00.618 "data_offset": 2048, 00:13:00.618 "data_size": 63488 00:13:00.618 }, 00:13:00.618 { 00:13:00.618 "name": "BaseBdev2", 00:13:00.618 
"uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 00:13:00.618 "is_configured": true, 00:13:00.618 "data_offset": 2048, 00:13:00.618 "data_size": 63488 00:13:00.618 }, 00:13:00.618 { 00:13:00.618 "name": "BaseBdev3", 00:13:00.618 "uuid": "cd80cae8-3483-45e1-b9a9-09cadd837ec8", 00:13:00.618 "is_configured": true, 00:13:00.618 "data_offset": 2048, 00:13:00.618 "data_size": 63488 00:13:00.618 }, 00:13:00.618 { 00:13:00.618 "name": "BaseBdev4", 00:13:00.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.618 "is_configured": false, 00:13:00.618 "data_offset": 0, 00:13:00.618 "data_size": 0 00:13:00.618 } 00:13:00.618 ] 00:13:00.618 }' 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.618 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 16:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:01.188 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.188 16:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-11-05 16:27:14.019128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.188 [2024-11-05 16:27:14.019437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:01.188 [2024-11-05 16:27:14.019456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:01.188 [2024-11-05 16:27:14.019799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:01.188 BaseBdev4 00:13:01.188 [2024-11-05 16:27:14.020003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:01.188 [2024-11-05 16:27:14.020028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:01.188 [2024-11-05 16:27:14.020212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [ 00:13:01.188 { 00:13:01.188 "name": "BaseBdev4", 00:13:01.188 "aliases": [ 00:13:01.188 "77fa423c-3e4e-4bf4-9043-4098bccb0cd7" 00:13:01.188 ], 00:13:01.188 "product_name": "Malloc disk", 00:13:01.188 "block_size": 512, 00:13:01.188 
"num_blocks": 65536, 00:13:01.188 "uuid": "77fa423c-3e4e-4bf4-9043-4098bccb0cd7", 00:13:01.188 "assigned_rate_limits": { 00:13:01.188 "rw_ios_per_sec": 0, 00:13:01.188 "rw_mbytes_per_sec": 0, 00:13:01.188 "r_mbytes_per_sec": 0, 00:13:01.188 "w_mbytes_per_sec": 0 00:13:01.188 }, 00:13:01.188 "claimed": true, 00:13:01.188 "claim_type": "exclusive_write", 00:13:01.188 "zoned": false, 00:13:01.188 "supported_io_types": { 00:13:01.188 "read": true, 00:13:01.188 "write": true, 00:13:01.188 "unmap": true, 00:13:01.188 "flush": true, 00:13:01.188 "reset": true, 00:13:01.188 "nvme_admin": false, 00:13:01.188 "nvme_io": false, 00:13:01.188 "nvme_io_md": false, 00:13:01.188 "write_zeroes": true, 00:13:01.188 "zcopy": true, 00:13:01.188 "get_zone_info": false, 00:13:01.188 "zone_management": false, 00:13:01.188 "zone_append": false, 00:13:01.188 "compare": false, 00:13:01.188 "compare_and_write": false, 00:13:01.188 "abort": true, 00:13:01.188 "seek_hole": false, 00:13:01.188 "seek_data": false, 00:13:01.188 "copy": true, 00:13:01.188 "nvme_iov_md": false 00:13:01.188 }, 00:13:01.188 "memory_domains": [ 00:13:01.188 { 00:13:01.188 "dma_device_id": "system", 00:13:01.188 "dma_device_type": 1 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.188 "dma_device_type": 2 00:13:01.188 } 00:13:01.188 ], 00:13:01.188 "driver_specific": {} 00:13:01.188 } 00:13:01.188 ] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.188 "name": "Existed_Raid", 00:13:01.188 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:01.188 "strip_size_kb": 64, 00:13:01.188 "state": "online", 00:13:01.188 "raid_level": "concat", 00:13:01.188 "superblock": true, 00:13:01.188 "num_base_bdevs": 4, 
00:13:01.188 "num_base_bdevs_discovered": 4, 00:13:01.188 "num_base_bdevs_operational": 4, 00:13:01.188 "base_bdevs_list": [ 00:13:01.188 { 00:13:01.188 "name": "BaseBdev1", 00:13:01.188 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "name": "BaseBdev2", 00:13:01.188 "uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "name": "BaseBdev3", 00:13:01.188 "uuid": "cd80cae8-3483-45e1-b9a9-09cadd837ec8", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "name": "BaseBdev4", 00:13:01.188 "uuid": "77fa423c-3e4e-4bf4-9043-4098bccb0cd7", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 } 00:13:01.188 ] 00:13:01.188 }' 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.188 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.447 
16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.447 [2024-11-05 16:27:14.506904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.447 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.706 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.706 "name": "Existed_Raid", 00:13:01.706 "aliases": [ 00:13:01.706 "b3b833e5-c1b7-4ddd-8222-6b12533eea91" 00:13:01.706 ], 00:13:01.706 "product_name": "Raid Volume", 00:13:01.706 "block_size": 512, 00:13:01.706 "num_blocks": 253952, 00:13:01.706 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:01.706 "assigned_rate_limits": { 00:13:01.706 "rw_ios_per_sec": 0, 00:13:01.706 "rw_mbytes_per_sec": 0, 00:13:01.706 "r_mbytes_per_sec": 0, 00:13:01.706 "w_mbytes_per_sec": 0 00:13:01.706 }, 00:13:01.706 "claimed": false, 00:13:01.706 "zoned": false, 00:13:01.706 "supported_io_types": { 00:13:01.706 "read": true, 00:13:01.706 "write": true, 00:13:01.706 "unmap": true, 00:13:01.706 "flush": true, 00:13:01.706 "reset": true, 00:13:01.706 "nvme_admin": false, 00:13:01.706 "nvme_io": false, 00:13:01.706 "nvme_io_md": false, 00:13:01.706 "write_zeroes": true, 00:13:01.706 "zcopy": false, 00:13:01.706 "get_zone_info": false, 00:13:01.706 "zone_management": false, 00:13:01.706 "zone_append": false, 00:13:01.706 "compare": false, 00:13:01.706 "compare_and_write": false, 00:13:01.706 "abort": false, 00:13:01.706 "seek_hole": false, 00:13:01.706 "seek_data": false, 00:13:01.706 "copy": false, 00:13:01.706 
"nvme_iov_md": false 00:13:01.706 }, 00:13:01.706 "memory_domains": [ 00:13:01.706 { 00:13:01.706 "dma_device_id": "system", 00:13:01.706 "dma_device_type": 1 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.706 "dma_device_type": 2 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "system", 00:13:01.706 "dma_device_type": 1 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.706 "dma_device_type": 2 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "system", 00:13:01.706 "dma_device_type": 1 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.706 "dma_device_type": 2 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "system", 00:13:01.706 "dma_device_type": 1 00:13:01.706 }, 00:13:01.706 { 00:13:01.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.706 "dma_device_type": 2 00:13:01.706 } 00:13:01.706 ], 00:13:01.707 "driver_specific": { 00:13:01.707 "raid": { 00:13:01.707 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:01.707 "strip_size_kb": 64, 00:13:01.707 "state": "online", 00:13:01.707 "raid_level": "concat", 00:13:01.707 "superblock": true, 00:13:01.707 "num_base_bdevs": 4, 00:13:01.707 "num_base_bdevs_discovered": 4, 00:13:01.707 "num_base_bdevs_operational": 4, 00:13:01.707 "base_bdevs_list": [ 00:13:01.707 { 00:13:01.707 "name": "BaseBdev1", 00:13:01.707 "uuid": "b0445f8b-a891-4cc6-900c-326532201318", 00:13:01.707 "is_configured": true, 00:13:01.707 "data_offset": 2048, 00:13:01.707 "data_size": 63488 00:13:01.707 }, 00:13:01.707 { 00:13:01.707 "name": "BaseBdev2", 00:13:01.707 "uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 00:13:01.707 "is_configured": true, 00:13:01.707 "data_offset": 2048, 00:13:01.707 "data_size": 63488 00:13:01.707 }, 00:13:01.707 { 00:13:01.707 "name": "BaseBdev3", 00:13:01.707 "uuid": "cd80cae8-3483-45e1-b9a9-09cadd837ec8", 00:13:01.707 "is_configured": true, 
00:13:01.707 "data_offset": 2048, 00:13:01.707 "data_size": 63488 00:13:01.707 }, 00:13:01.707 { 00:13:01.707 "name": "BaseBdev4", 00:13:01.707 "uuid": "77fa423c-3e4e-4bf4-9043-4098bccb0cd7", 00:13:01.707 "is_configured": true, 00:13:01.707 "data_offset": 2048, 00:13:01.707 "data_size": 63488 00:13:01.707 } 00:13:01.707 ] 00:13:01.707 } 00:13:01.707 } 00:13:01.707 }' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:01.707 BaseBdev2 00:13:01.707 BaseBdev3 00:13:01.707 BaseBdev4' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.707 16:27:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.707 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.966 [2024-11-05 16:27:14.846003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.966 [2024-11-05 16:27:14.846162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.966 [2024-11-05 16:27:14.846264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.966 16:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:01.966 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.966 "name": "Existed_Raid", 00:13:01.966 "uuid": "b3b833e5-c1b7-4ddd-8222-6b12533eea91", 00:13:01.966 "strip_size_kb": 64, 00:13:01.966 "state": "offline", 00:13:01.966 "raid_level": "concat", 00:13:01.966 "superblock": true, 00:13:01.966 "num_base_bdevs": 4, 00:13:01.966 "num_base_bdevs_discovered": 3, 00:13:01.966 "num_base_bdevs_operational": 3, 00:13:01.966 "base_bdevs_list": [ 00:13:01.966 { 00:13:01.966 "name": null, 00:13:01.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.966 "is_configured": false, 00:13:01.966 "data_offset": 0, 00:13:01.966 "data_size": 63488 00:13:01.966 }, 00:13:01.966 { 00:13:01.967 "name": "BaseBdev2", 00:13:01.967 "uuid": "be5e8907-2c9e-4c56-92b9-fa16aa05145c", 00:13:01.967 "is_configured": true, 00:13:01.967 "data_offset": 2048, 00:13:01.967 "data_size": 63488 00:13:01.967 }, 00:13:01.967 { 00:13:01.967 "name": "BaseBdev3", 00:13:01.967 "uuid": "cd80cae8-3483-45e1-b9a9-09cadd837ec8", 00:13:01.967 "is_configured": true, 00:13:01.967 "data_offset": 2048, 00:13:01.967 "data_size": 63488 00:13:01.967 }, 00:13:01.967 { 00:13:01.967 "name": "BaseBdev4", 00:13:01.967 "uuid": "77fa423c-3e4e-4bf4-9043-4098bccb0cd7", 00:13:01.967 "is_configured": true, 00:13:01.967 "data_offset": 2048, 00:13:01.967 "data_size": 63488 00:13:01.967 } 00:13:01.967 ] 00:13:01.967 }' 00:13:01.967 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.967 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.535 
16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.535 [2024-11-05 16:27:15.489437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.535 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.794 [2024-11-05 16:27:15.668350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:02.794 16:27:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.794 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.794 [2024-11-05 16:27:15.839354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:02.794 [2024-11-05 16:27:15.839437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 16:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.054 BaseBdev2 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.054 [ 00:13:03.054 { 00:13:03.054 "name": "BaseBdev2", 00:13:03.054 "aliases": [ 00:13:03.054 
"0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba" 00:13:03.054 ], 00:13:03.054 "product_name": "Malloc disk", 00:13:03.054 "block_size": 512, 00:13:03.054 "num_blocks": 65536, 00:13:03.054 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:03.054 "assigned_rate_limits": { 00:13:03.054 "rw_ios_per_sec": 0, 00:13:03.054 "rw_mbytes_per_sec": 0, 00:13:03.054 "r_mbytes_per_sec": 0, 00:13:03.054 "w_mbytes_per_sec": 0 00:13:03.054 }, 00:13:03.054 "claimed": false, 00:13:03.054 "zoned": false, 00:13:03.054 "supported_io_types": { 00:13:03.054 "read": true, 00:13:03.054 "write": true, 00:13:03.054 "unmap": true, 00:13:03.054 "flush": true, 00:13:03.054 "reset": true, 00:13:03.054 "nvme_admin": false, 00:13:03.054 "nvme_io": false, 00:13:03.054 "nvme_io_md": false, 00:13:03.054 "write_zeroes": true, 00:13:03.054 "zcopy": true, 00:13:03.054 "get_zone_info": false, 00:13:03.054 "zone_management": false, 00:13:03.054 "zone_append": false, 00:13:03.054 "compare": false, 00:13:03.054 "compare_and_write": false, 00:13:03.054 "abort": true, 00:13:03.054 "seek_hole": false, 00:13:03.054 "seek_data": false, 00:13:03.054 "copy": true, 00:13:03.054 "nvme_iov_md": false 00:13:03.054 }, 00:13:03.054 "memory_domains": [ 00:13:03.054 { 00:13:03.054 "dma_device_id": "system", 00:13:03.054 "dma_device_type": 1 00:13:03.054 }, 00:13:03.054 { 00:13:03.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.054 "dma_device_type": 2 00:13:03.054 } 00:13:03.054 ], 00:13:03.054 "driver_specific": {} 00:13:03.054 } 00:13:03.054 ] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.054 16:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.054 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 BaseBdev3 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 [ 00:13:03.314 { 
00:13:03.314 "name": "BaseBdev3", 00:13:03.314 "aliases": [ 00:13:03.314 "378dbc6d-9fb6-4771-83b9-f581754e8c78" 00:13:03.314 ], 00:13:03.314 "product_name": "Malloc disk", 00:13:03.314 "block_size": 512, 00:13:03.314 "num_blocks": 65536, 00:13:03.314 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:03.314 "assigned_rate_limits": { 00:13:03.314 "rw_ios_per_sec": 0, 00:13:03.314 "rw_mbytes_per_sec": 0, 00:13:03.314 "r_mbytes_per_sec": 0, 00:13:03.314 "w_mbytes_per_sec": 0 00:13:03.314 }, 00:13:03.314 "claimed": false, 00:13:03.314 "zoned": false, 00:13:03.314 "supported_io_types": { 00:13:03.314 "read": true, 00:13:03.314 "write": true, 00:13:03.314 "unmap": true, 00:13:03.314 "flush": true, 00:13:03.314 "reset": true, 00:13:03.314 "nvme_admin": false, 00:13:03.314 "nvme_io": false, 00:13:03.314 "nvme_io_md": false, 00:13:03.314 "write_zeroes": true, 00:13:03.314 "zcopy": true, 00:13:03.314 "get_zone_info": false, 00:13:03.314 "zone_management": false, 00:13:03.314 "zone_append": false, 00:13:03.314 "compare": false, 00:13:03.314 "compare_and_write": false, 00:13:03.314 "abort": true, 00:13:03.314 "seek_hole": false, 00:13:03.314 "seek_data": false, 00:13:03.314 "copy": true, 00:13:03.314 "nvme_iov_md": false 00:13:03.314 }, 00:13:03.314 "memory_domains": [ 00:13:03.314 { 00:13:03.314 "dma_device_id": "system", 00:13:03.314 "dma_device_type": 1 00:13:03.314 }, 00:13:03.314 { 00:13:03.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.314 "dma_device_type": 2 00:13:03.314 } 00:13:03.314 ], 00:13:03.314 "driver_specific": {} 00:13:03.314 } 00:13:03.314 ] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 BaseBdev4 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:03.314 [ 00:13:03.314 { 00:13:03.314 "name": "BaseBdev4", 00:13:03.314 "aliases": [ 00:13:03.314 "9aa43041-aac5-4c49-bfe6-1157990d98d5" 00:13:03.314 ], 00:13:03.314 "product_name": "Malloc disk", 00:13:03.314 "block_size": 512, 00:13:03.314 "num_blocks": 65536, 00:13:03.314 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:03.314 "assigned_rate_limits": { 00:13:03.314 "rw_ios_per_sec": 0, 00:13:03.314 "rw_mbytes_per_sec": 0, 00:13:03.314 "r_mbytes_per_sec": 0, 00:13:03.314 "w_mbytes_per_sec": 0 00:13:03.314 }, 00:13:03.314 "claimed": false, 00:13:03.314 "zoned": false, 00:13:03.314 "supported_io_types": { 00:13:03.314 "read": true, 00:13:03.314 "write": true, 00:13:03.314 "unmap": true, 00:13:03.314 "flush": true, 00:13:03.314 "reset": true, 00:13:03.314 "nvme_admin": false, 00:13:03.314 "nvme_io": false, 00:13:03.314 "nvme_io_md": false, 00:13:03.314 "write_zeroes": true, 00:13:03.314 "zcopy": true, 00:13:03.314 "get_zone_info": false, 00:13:03.314 "zone_management": false, 00:13:03.314 "zone_append": false, 00:13:03.314 "compare": false, 00:13:03.314 "compare_and_write": false, 00:13:03.314 "abort": true, 00:13:03.314 "seek_hole": false, 00:13:03.314 "seek_data": false, 00:13:03.314 "copy": true, 00:13:03.314 "nvme_iov_md": false 00:13:03.314 }, 00:13:03.314 "memory_domains": [ 00:13:03.314 { 00:13:03.314 "dma_device_id": "system", 00:13:03.314 "dma_device_type": 1 00:13:03.314 }, 00:13:03.314 { 00:13:03.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.314 "dma_device_type": 2 00:13:03.314 } 00:13:03.314 ], 00:13:03.314 "driver_specific": {} 00:13:03.314 } 00:13:03.314 ] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:03.314 16:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.314 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.314 [2024-11-05 16:27:16.293664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.314 [2024-11-05 16:27:16.293848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.314 [2024-11-05 16:27:16.293914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.314 [2024-11-05 16:27:16.296496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.314 [2024-11-05 16:27:16.296649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.315 "name": "Existed_Raid", 00:13:03.315 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:03.315 "strip_size_kb": 64, 00:13:03.315 "state": "configuring", 00:13:03.315 "raid_level": "concat", 00:13:03.315 "superblock": true, 00:13:03.315 "num_base_bdevs": 4, 00:13:03.315 "num_base_bdevs_discovered": 3, 00:13:03.315 "num_base_bdevs_operational": 4, 00:13:03.315 "base_bdevs_list": [ 00:13:03.315 { 00:13:03.315 "name": "BaseBdev1", 00:13:03.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.315 "is_configured": false, 00:13:03.315 "data_offset": 0, 00:13:03.315 "data_size": 0 00:13:03.315 }, 00:13:03.315 { 00:13:03.315 "name": "BaseBdev2", 00:13:03.315 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:03.315 "is_configured": true, 00:13:03.315 "data_offset": 2048, 00:13:03.315 "data_size": 63488 
00:13:03.315 }, 00:13:03.315 { 00:13:03.315 "name": "BaseBdev3", 00:13:03.315 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:03.315 "is_configured": true, 00:13:03.315 "data_offset": 2048, 00:13:03.315 "data_size": 63488 00:13:03.315 }, 00:13:03.315 { 00:13:03.315 "name": "BaseBdev4", 00:13:03.315 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:03.315 "is_configured": true, 00:13:03.315 "data_offset": 2048, 00:13:03.315 "data_size": 63488 00:13:03.315 } 00:13:03.315 ] 00:13:03.315 }' 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.315 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.883 [2024-11-05 16:27:16.792791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.883 "name": "Existed_Raid", 00:13:03.883 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:03.883 "strip_size_kb": 64, 00:13:03.883 "state": "configuring", 00:13:03.883 "raid_level": "concat", 00:13:03.883 "superblock": true, 00:13:03.883 "num_base_bdevs": 4, 00:13:03.883 "num_base_bdevs_discovered": 2, 00:13:03.883 "num_base_bdevs_operational": 4, 00:13:03.883 "base_bdevs_list": [ 00:13:03.883 { 00:13:03.883 "name": "BaseBdev1", 00:13:03.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.883 "is_configured": false, 00:13:03.883 "data_offset": 0, 00:13:03.883 "data_size": 0 00:13:03.883 }, 00:13:03.883 { 00:13:03.883 "name": null, 00:13:03.883 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:03.883 "is_configured": false, 00:13:03.883 "data_offset": 0, 00:13:03.883 "data_size": 63488 
00:13:03.883 }, 00:13:03.883 { 00:13:03.883 "name": "BaseBdev3", 00:13:03.883 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:03.883 "is_configured": true, 00:13:03.883 "data_offset": 2048, 00:13:03.883 "data_size": 63488 00:13:03.883 }, 00:13:03.883 { 00:13:03.883 "name": "BaseBdev4", 00:13:03.883 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:03.883 "is_configured": true, 00:13:03.883 "data_offset": 2048, 00:13:03.883 "data_size": 63488 00:13:03.883 } 00:13:03.883 ] 00:13:03.883 }' 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.883 16:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 [2024-11-05 16:27:17.370322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.451 BaseBdev1 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 [ 00:13:04.451 { 00:13:04.451 "name": "BaseBdev1", 00:13:04.451 "aliases": [ 00:13:04.451 "001da91d-bcad-4317-b0cb-b53eedc3c770" 00:13:04.451 ], 00:13:04.451 "product_name": "Malloc disk", 00:13:04.451 "block_size": 512, 00:13:04.451 "num_blocks": 65536, 00:13:04.451 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:04.451 "assigned_rate_limits": { 00:13:04.451 "rw_ios_per_sec": 0, 00:13:04.451 "rw_mbytes_per_sec": 0, 
00:13:04.451 "r_mbytes_per_sec": 0, 00:13:04.451 "w_mbytes_per_sec": 0 00:13:04.451 }, 00:13:04.451 "claimed": true, 00:13:04.451 "claim_type": "exclusive_write", 00:13:04.451 "zoned": false, 00:13:04.451 "supported_io_types": { 00:13:04.451 "read": true, 00:13:04.451 "write": true, 00:13:04.451 "unmap": true, 00:13:04.451 "flush": true, 00:13:04.451 "reset": true, 00:13:04.451 "nvme_admin": false, 00:13:04.451 "nvme_io": false, 00:13:04.451 "nvme_io_md": false, 00:13:04.451 "write_zeroes": true, 00:13:04.451 "zcopy": true, 00:13:04.451 "get_zone_info": false, 00:13:04.451 "zone_management": false, 00:13:04.451 "zone_append": false, 00:13:04.451 "compare": false, 00:13:04.451 "compare_and_write": false, 00:13:04.451 "abort": true, 00:13:04.451 "seek_hole": false, 00:13:04.451 "seek_data": false, 00:13:04.451 "copy": true, 00:13:04.451 "nvme_iov_md": false 00:13:04.451 }, 00:13:04.451 "memory_domains": [ 00:13:04.451 { 00:13:04.451 "dma_device_id": "system", 00:13:04.451 "dma_device_type": 1 00:13:04.451 }, 00:13:04.451 { 00:13:04.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.451 "dma_device_type": 2 00:13:04.451 } 00:13:04.451 ], 00:13:04.451 "driver_specific": {} 00:13:04.451 } 00:13:04.451 ] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.451 16:27:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.451 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.451 "name": "Existed_Raid", 00:13:04.451 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:04.451 "strip_size_kb": 64, 00:13:04.451 "state": "configuring", 00:13:04.451 "raid_level": "concat", 00:13:04.451 "superblock": true, 00:13:04.451 "num_base_bdevs": 4, 00:13:04.451 "num_base_bdevs_discovered": 3, 00:13:04.451 "num_base_bdevs_operational": 4, 00:13:04.451 "base_bdevs_list": [ 00:13:04.451 { 00:13:04.451 "name": "BaseBdev1", 00:13:04.451 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:04.451 "is_configured": true, 00:13:04.451 "data_offset": 2048, 00:13:04.451 "data_size": 63488 00:13:04.451 }, 00:13:04.451 { 
00:13:04.451 "name": null, 00:13:04.451 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:04.451 "is_configured": false, 00:13:04.451 "data_offset": 0, 00:13:04.451 "data_size": 63488 00:13:04.451 }, 00:13:04.451 { 00:13:04.451 "name": "BaseBdev3", 00:13:04.451 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:04.451 "is_configured": true, 00:13:04.451 "data_offset": 2048, 00:13:04.451 "data_size": 63488 00:13:04.451 }, 00:13:04.451 { 00:13:04.451 "name": "BaseBdev4", 00:13:04.451 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:04.451 "is_configured": true, 00:13:04.451 "data_offset": 2048, 00:13:04.451 "data_size": 63488 00:13:04.451 } 00:13:04.451 ] 00:13:04.451 }' 00:13:04.452 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.452 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 [2024-11-05 16:27:17.953465] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.020 16:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.020 16:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.020 "name": "Existed_Raid", 00:13:05.020 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:05.020 "strip_size_kb": 64, 00:13:05.020 "state": "configuring", 00:13:05.020 "raid_level": "concat", 00:13:05.020 "superblock": true, 00:13:05.020 "num_base_bdevs": 4, 00:13:05.020 "num_base_bdevs_discovered": 2, 00:13:05.020 "num_base_bdevs_operational": 4, 00:13:05.020 "base_bdevs_list": [ 00:13:05.020 { 00:13:05.020 "name": "BaseBdev1", 00:13:05.020 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:05.020 "is_configured": true, 00:13:05.020 "data_offset": 2048, 00:13:05.020 "data_size": 63488 00:13:05.020 }, 00:13:05.020 { 00:13:05.020 "name": null, 00:13:05.020 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:05.020 "is_configured": false, 00:13:05.020 "data_offset": 0, 00:13:05.020 "data_size": 63488 00:13:05.020 }, 00:13:05.020 { 00:13:05.020 "name": null, 00:13:05.020 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:05.020 "is_configured": false, 00:13:05.020 "data_offset": 0, 00:13:05.020 "data_size": 63488 00:13:05.020 }, 00:13:05.020 { 00:13:05.020 "name": "BaseBdev4", 00:13:05.020 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:05.020 "is_configured": true, 00:13:05.020 "data_offset": 2048, 00:13:05.020 "data_size": 63488 00:13:05.020 } 00:13:05.020 ] 00:13:05.020 }' 00:13:05.020 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.020 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.587 16:27:18 
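[Editorial note, not part of the trace: after `bdev_raid_remove_base_bdev`, the state dump above shows that a removed slot keeps its uuid but reports `"name": null` and `"is_configured": false`, and the test probes this with jq filters like `.[0].base_bdevs_list[2].is_configured`. A minimal offline sketch of that probe, with the list abridged from the trace:]

```python
# Offline illustration only: after removing BaseBdev2 and BaseBdev3,
# their slots report name == null / is_configured == false while the
# uuid is retained. List abridged from the Existed_Raid dump above.
import json

base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", "is_configured": true},
  {"name": null, "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", "is_configured": false},
  {"name": null, "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", "is_configured": false},
  {"name": "BaseBdev4", "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", "is_configured": true}
]
""")

# jq equivalent: .[0].base_bdevs_list[2].is_configured
assert base_bdevs_list[2]["is_configured"] is False

# Count of configured slots matches num_base_bdevs_discovered in the dump
configured = sum(1 for b in base_bdevs_list if b["is_configured"])
print(configured)  # 2
```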
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.587 [2024-11-05 16:27:18.476774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.587 "name": "Existed_Raid", 00:13:05.587 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:05.587 "strip_size_kb": 64, 00:13:05.587 "state": "configuring", 00:13:05.587 "raid_level": "concat", 00:13:05.587 "superblock": true, 00:13:05.587 "num_base_bdevs": 4, 00:13:05.587 "num_base_bdevs_discovered": 3, 00:13:05.587 "num_base_bdevs_operational": 4, 00:13:05.587 "base_bdevs_list": [ 00:13:05.587 { 00:13:05.587 "name": "BaseBdev1", 00:13:05.587 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:05.587 "is_configured": true, 00:13:05.587 "data_offset": 2048, 00:13:05.587 "data_size": 63488 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": null, 00:13:05.587 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:05.587 "is_configured": false, 00:13:05.587 "data_offset": 0, 00:13:05.587 "data_size": 63488 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": "BaseBdev3", 00:13:05.587 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:05.587 "is_configured": true, 00:13:05.587 "data_offset": 2048, 00:13:05.587 "data_size": 63488 00:13:05.587 }, 00:13:05.587 { 00:13:05.587 "name": "BaseBdev4", 00:13:05.587 "uuid": 
"9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:05.587 "is_configured": true, 00:13:05.587 "data_offset": 2048, 00:13:05.587 "data_size": 63488 00:13:05.587 } 00:13:05.587 ] 00:13:05.587 }' 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.587 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.846 16:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.106 [2024-11-05 16:27:18.940835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.106 "name": "Existed_Raid", 00:13:06.106 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:06.106 "strip_size_kb": 64, 00:13:06.106 "state": "configuring", 00:13:06.106 "raid_level": "concat", 00:13:06.106 "superblock": true, 00:13:06.106 "num_base_bdevs": 4, 00:13:06.106 "num_base_bdevs_discovered": 2, 00:13:06.106 "num_base_bdevs_operational": 4, 00:13:06.106 "base_bdevs_list": [ 00:13:06.106 { 00:13:06.106 "name": null, 00:13:06.106 
"uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:06.106 "is_configured": false, 00:13:06.106 "data_offset": 0, 00:13:06.106 "data_size": 63488 00:13:06.106 }, 00:13:06.106 { 00:13:06.106 "name": null, 00:13:06.106 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:06.106 "is_configured": false, 00:13:06.106 "data_offset": 0, 00:13:06.106 "data_size": 63488 00:13:06.106 }, 00:13:06.106 { 00:13:06.106 "name": "BaseBdev3", 00:13:06.106 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:06.106 "is_configured": true, 00:13:06.106 "data_offset": 2048, 00:13:06.106 "data_size": 63488 00:13:06.106 }, 00:13:06.106 { 00:13:06.106 "name": "BaseBdev4", 00:13:06.106 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:06.106 "is_configured": true, 00:13:06.106 "data_offset": 2048, 00:13:06.106 "data_size": 63488 00:13:06.106 } 00:13:06.106 ] 00:13:06.106 }' 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.106 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 [2024-11-05 16:27:19.622243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.706 "name": "Existed_Raid", 00:13:06.706 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:06.706 "strip_size_kb": 64, 00:13:06.706 "state": "configuring", 00:13:06.706 "raid_level": "concat", 00:13:06.706 "superblock": true, 00:13:06.706 "num_base_bdevs": 4, 00:13:06.706 "num_base_bdevs_discovered": 3, 00:13:06.706 "num_base_bdevs_operational": 4, 00:13:06.706 "base_bdevs_list": [ 00:13:06.706 { 00:13:06.706 "name": null, 00:13:06.706 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:06.706 "is_configured": false, 00:13:06.706 "data_offset": 0, 00:13:06.706 "data_size": 63488 00:13:06.706 }, 00:13:06.706 { 00:13:06.706 "name": "BaseBdev2", 00:13:06.706 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:06.706 "is_configured": true, 00:13:06.706 "data_offset": 2048, 00:13:06.706 "data_size": 63488 00:13:06.706 }, 00:13:06.706 { 00:13:06.706 "name": "BaseBdev3", 00:13:06.706 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:06.706 "is_configured": true, 00:13:06.706 "data_offset": 2048, 00:13:06.706 "data_size": 63488 00:13:06.706 }, 00:13:06.706 { 00:13:06.706 "name": "BaseBdev4", 00:13:06.706 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:06.706 "is_configured": true, 00:13:06.706 "data_offset": 2048, 00:13:06.706 "data_size": 63488 00:13:06.706 } 00:13:06.706 ] 00:13:06.706 }' 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.706 16:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.283 16:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 001da91d-bcad-4317-b0cb-b53eedc3c770 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 [2024-11-05 16:27:20.286405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:07.283 [2024-11-05 16:27:20.286749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:07.283 [2024-11-05 16:27:20.286763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:07.283 [2024-11-05 16:27:20.287066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:07.283 [2024-11-05 16:27:20.287244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:07.283 [2024-11-05 16:27:20.287258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:07.283 NewBaseBdev 00:13:07.283 [2024-11-05 16:27:20.287432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:07.283 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 16:27:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 [ 00:13:07.283 { 00:13:07.283 "name": "NewBaseBdev", 00:13:07.283 "aliases": [ 00:13:07.283 "001da91d-bcad-4317-b0cb-b53eedc3c770" 00:13:07.283 ], 00:13:07.283 "product_name": "Malloc disk", 00:13:07.283 "block_size": 512, 00:13:07.283 "num_blocks": 65536, 00:13:07.284 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:07.284 "assigned_rate_limits": { 00:13:07.284 "rw_ios_per_sec": 0, 00:13:07.284 "rw_mbytes_per_sec": 0, 00:13:07.284 "r_mbytes_per_sec": 0, 00:13:07.284 "w_mbytes_per_sec": 0 00:13:07.284 }, 00:13:07.284 "claimed": true, 00:13:07.284 "claim_type": "exclusive_write", 00:13:07.284 "zoned": false, 00:13:07.284 "supported_io_types": { 00:13:07.284 "read": true, 00:13:07.284 "write": true, 00:13:07.284 "unmap": true, 00:13:07.284 "flush": true, 00:13:07.284 "reset": true, 00:13:07.284 "nvme_admin": false, 00:13:07.284 "nvme_io": false, 00:13:07.284 "nvme_io_md": false, 00:13:07.284 "write_zeroes": true, 00:13:07.284 "zcopy": true, 00:13:07.284 "get_zone_info": false, 00:13:07.284 "zone_management": false, 00:13:07.284 "zone_append": false, 00:13:07.284 "compare": false, 00:13:07.284 "compare_and_write": false, 00:13:07.284 "abort": true, 00:13:07.284 "seek_hole": false, 00:13:07.284 "seek_data": false, 00:13:07.284 "copy": true, 00:13:07.284 "nvme_iov_md": false 00:13:07.284 }, 00:13:07.284 "memory_domains": [ 00:13:07.284 { 00:13:07.284 "dma_device_id": "system", 00:13:07.284 "dma_device_type": 1 00:13:07.284 }, 00:13:07.284 { 00:13:07.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.284 "dma_device_type": 2 00:13:07.284 } 00:13:07.284 ], 00:13:07.284 "driver_specific": {} 00:13:07.284 } 00:13:07.284 ] 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:07.284 16:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.284 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.543 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.543 "name": "Existed_Raid", 00:13:07.543 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:07.543 "strip_size_kb": 64, 00:13:07.543 
"state": "online", 00:13:07.543 "raid_level": "concat", 00:13:07.543 "superblock": true, 00:13:07.543 "num_base_bdevs": 4, 00:13:07.543 "num_base_bdevs_discovered": 4, 00:13:07.543 "num_base_bdevs_operational": 4, 00:13:07.543 "base_bdevs_list": [ 00:13:07.543 { 00:13:07.543 "name": "NewBaseBdev", 00:13:07.543 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:07.543 "is_configured": true, 00:13:07.543 "data_offset": 2048, 00:13:07.543 "data_size": 63488 00:13:07.543 }, 00:13:07.543 { 00:13:07.543 "name": "BaseBdev2", 00:13:07.543 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:07.543 "is_configured": true, 00:13:07.543 "data_offset": 2048, 00:13:07.543 "data_size": 63488 00:13:07.543 }, 00:13:07.543 { 00:13:07.543 "name": "BaseBdev3", 00:13:07.543 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:07.543 "is_configured": true, 00:13:07.543 "data_offset": 2048, 00:13:07.543 "data_size": 63488 00:13:07.543 }, 00:13:07.543 { 00:13:07.543 "name": "BaseBdev4", 00:13:07.543 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:07.543 "is_configured": true, 00:13:07.543 "data_offset": 2048, 00:13:07.543 "data_size": 63488 00:13:07.543 } 00:13:07.543 ] 00:13:07.543 }' 00:13:07.543 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.543 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.803 
16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.803 [2024-11-05 16:27:20.794086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.803 "name": "Existed_Raid", 00:13:07.803 "aliases": [ 00:13:07.803 "c86b9313-9560-4a22-8403-2b4da1859141" 00:13:07.803 ], 00:13:07.803 "product_name": "Raid Volume", 00:13:07.803 "block_size": 512, 00:13:07.803 "num_blocks": 253952, 00:13:07.803 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:07.803 "assigned_rate_limits": { 00:13:07.803 "rw_ios_per_sec": 0, 00:13:07.803 "rw_mbytes_per_sec": 0, 00:13:07.803 "r_mbytes_per_sec": 0, 00:13:07.803 "w_mbytes_per_sec": 0 00:13:07.803 }, 00:13:07.803 "claimed": false, 00:13:07.803 "zoned": false, 00:13:07.803 "supported_io_types": { 00:13:07.803 "read": true, 00:13:07.803 "write": true, 00:13:07.803 "unmap": true, 00:13:07.803 "flush": true, 00:13:07.803 "reset": true, 00:13:07.803 "nvme_admin": false, 00:13:07.803 "nvme_io": false, 00:13:07.803 "nvme_io_md": false, 00:13:07.803 "write_zeroes": true, 00:13:07.803 "zcopy": false, 00:13:07.803 "get_zone_info": false, 00:13:07.803 "zone_management": false, 00:13:07.803 "zone_append": false, 00:13:07.803 "compare": false, 00:13:07.803 "compare_and_write": false, 00:13:07.803 "abort": 
false, 00:13:07.803 "seek_hole": false, 00:13:07.803 "seek_data": false, 00:13:07.803 "copy": false, 00:13:07.803 "nvme_iov_md": false 00:13:07.803 }, 00:13:07.803 "memory_domains": [ 00:13:07.803 { 00:13:07.803 "dma_device_id": "system", 00:13:07.803 "dma_device_type": 1 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.803 "dma_device_type": 2 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "system", 00:13:07.803 "dma_device_type": 1 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.803 "dma_device_type": 2 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "system", 00:13:07.803 "dma_device_type": 1 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.803 "dma_device_type": 2 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "system", 00:13:07.803 "dma_device_type": 1 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.803 "dma_device_type": 2 00:13:07.803 } 00:13:07.803 ], 00:13:07.803 "driver_specific": { 00:13:07.803 "raid": { 00:13:07.803 "uuid": "c86b9313-9560-4a22-8403-2b4da1859141", 00:13:07.803 "strip_size_kb": 64, 00:13:07.803 "state": "online", 00:13:07.803 "raid_level": "concat", 00:13:07.803 "superblock": true, 00:13:07.803 "num_base_bdevs": 4, 00:13:07.803 "num_base_bdevs_discovered": 4, 00:13:07.803 "num_base_bdevs_operational": 4, 00:13:07.803 "base_bdevs_list": [ 00:13:07.803 { 00:13:07.803 "name": "NewBaseBdev", 00:13:07.803 "uuid": "001da91d-bcad-4317-b0cb-b53eedc3c770", 00:13:07.803 "is_configured": true, 00:13:07.803 "data_offset": 2048, 00:13:07.803 "data_size": 63488 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "name": "BaseBdev2", 00:13:07.803 "uuid": "0e47a36f-f2cd-406f-a91c-1e6ae24fd7ba", 00:13:07.803 "is_configured": true, 00:13:07.803 "data_offset": 2048, 00:13:07.803 "data_size": 63488 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 
"name": "BaseBdev3", 00:13:07.803 "uuid": "378dbc6d-9fb6-4771-83b9-f581754e8c78", 00:13:07.803 "is_configured": true, 00:13:07.803 "data_offset": 2048, 00:13:07.803 "data_size": 63488 00:13:07.803 }, 00:13:07.803 { 00:13:07.803 "name": "BaseBdev4", 00:13:07.803 "uuid": "9aa43041-aac5-4c49-bfe6-1157990d98d5", 00:13:07.803 "is_configured": true, 00:13:07.803 "data_offset": 2048, 00:13:07.803 "data_size": 63488 00:13:07.803 } 00:13:07.803 ] 00:13:07.803 } 00:13:07.803 } 00:13:07.803 }' 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:07.803 BaseBdev2 00:13:07.803 BaseBdev3 00:13:07.803 BaseBdev4' 00:13:07.803 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.063 16:27:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.063 16:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 [2024-11-05 16:27:21.145045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.063 [2024-11-05 16:27:21.145190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.063 [2024-11-05 16:27:21.145317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.063 [2024-11-05 16:27:21.145410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.063 [2024-11-05 16:27:21.145423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72263 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72263 ']' 00:13:08.063 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72263 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72263 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:08.322 killing process with pid 72263 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72263' 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72263 00:13:08.322 [2024-11-05 16:27:21.192386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.322 16:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72263 00:13:08.890 [2024-11-05 16:27:21.710414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.269 16:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:10.269 00:13:10.269 real 0m12.713s 00:13:10.269 user 0m19.907s 00:13:10.269 sys 0m2.226s 00:13:10.269 ************************************ 00:13:10.269 END TEST raid_state_function_test_sb 00:13:10.269 
************************************ 00:13:10.269 16:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.269 16:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.269 16:27:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:10.269 16:27:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:10.269 16:27:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.269 16:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.269 ************************************ 00:13:10.269 START TEST raid_superblock_test 00:13:10.269 ************************************ 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:10.269 16:27:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72939 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72939 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72939 ']' 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:10.269 16:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.269 [2024-11-05 16:27:23.300819] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:13:10.269 [2024-11-05 16:27:23.301081] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:13:10.528 [2024-11-05 16:27:23.480716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.787 [2024-11-05 16:27:23.631746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.047 [2024-11-05 16:27:23.895092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.047 [2024-11-05 16:27:23.895187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:11.308 
16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 malloc1 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 [2024-11-05 16:27:24.253312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.308 [2024-11-05 16:27:24.253528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.308 [2024-11-05 16:27:24.253588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.308 [2024-11-05 16:27:24.253663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.308 [2024-11-05 16:27:24.256457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.308 [2024-11-05 16:27:24.256591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.308 pt1 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 malloc2 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 [2024-11-05 16:27:24.324687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.308 [2024-11-05 16:27:24.324849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.308 [2024-11-05 16:27:24.324884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.308 [2024-11-05 16:27:24.324895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.308 [2024-11-05 16:27:24.327633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.308 [2024-11-05 16:27:24.327675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.308 
pt2 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.308 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 malloc3 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 [2024-11-05 16:27:24.410825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:11.568 [2024-11-05 16:27:24.410999] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.568 [2024-11-05 16:27:24.411049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:11.568 [2024-11-05 16:27:24.411086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.568 [2024-11-05 16:27:24.413891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.568 [2024-11-05 16:27:24.413977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:11.568 pt3 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 malloc4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 [2024-11-05 16:27:24.481857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:11.568 [2024-11-05 16:27:24.482016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.568 [2024-11-05 16:27:24.482060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:11.568 [2024-11-05 16:27:24.482113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.568 [2024-11-05 16:27:24.484888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.568 [2024-11-05 16:27:24.484977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:11.568 pt4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 [2024-11-05 16:27:24.493915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:11.568 [2024-11-05 
16:27:24.496268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.568 [2024-11-05 16:27:24.496392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:11.568 [2024-11-05 16:27:24.496499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:11.568 [2024-11-05 16:27:24.496787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:11.568 [2024-11-05 16:27:24.496837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:11.568 [2024-11-05 16:27:24.497182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:11.568 [2024-11-05 16:27:24.497403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:11.568 [2024-11-05 16:27:24.497419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:11.568 [2024-11-05 16:27:24.497631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.568 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.568 "name": "raid_bdev1", 00:13:11.568 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:11.568 "strip_size_kb": 64, 00:13:11.568 "state": "online", 00:13:11.568 "raid_level": "concat", 00:13:11.568 "superblock": true, 00:13:11.568 "num_base_bdevs": 4, 00:13:11.568 "num_base_bdevs_discovered": 4, 00:13:11.568 "num_base_bdevs_operational": 4, 00:13:11.568 "base_bdevs_list": [ 00:13:11.568 { 00:13:11.568 "name": "pt1", 00:13:11.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.568 "is_configured": true, 00:13:11.568 "data_offset": 2048, 00:13:11.568 "data_size": 63488 00:13:11.568 }, 00:13:11.568 { 00:13:11.569 "name": "pt2", 00:13:11.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.569 "is_configured": true, 00:13:11.569 "data_offset": 2048, 00:13:11.569 "data_size": 63488 00:13:11.569 }, 00:13:11.569 { 00:13:11.569 "name": "pt3", 00:13:11.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.569 "is_configured": true, 00:13:11.569 "data_offset": 2048, 00:13:11.569 
"data_size": 63488 00:13:11.569 }, 00:13:11.569 { 00:13:11.569 "name": "pt4", 00:13:11.569 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.569 "is_configured": true, 00:13:11.569 "data_offset": 2048, 00:13:11.569 "data_size": 63488 00:13:11.569 } 00:13:11.569 ] 00:13:11.569 }' 00:13:11.569 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.569 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.138 [2024-11-05 16:27:24.949625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.138 "name": "raid_bdev1", 00:13:12.138 "aliases": [ 00:13:12.138 "e67d969d-6419-4518-ae36-ecf18b9fb1a7" 
00:13:12.138 ], 00:13:12.138 "product_name": "Raid Volume", 00:13:12.138 "block_size": 512, 00:13:12.138 "num_blocks": 253952, 00:13:12.138 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:12.138 "assigned_rate_limits": { 00:13:12.138 "rw_ios_per_sec": 0, 00:13:12.138 "rw_mbytes_per_sec": 0, 00:13:12.138 "r_mbytes_per_sec": 0, 00:13:12.138 "w_mbytes_per_sec": 0 00:13:12.138 }, 00:13:12.138 "claimed": false, 00:13:12.138 "zoned": false, 00:13:12.138 "supported_io_types": { 00:13:12.138 "read": true, 00:13:12.138 "write": true, 00:13:12.138 "unmap": true, 00:13:12.138 "flush": true, 00:13:12.138 "reset": true, 00:13:12.138 "nvme_admin": false, 00:13:12.138 "nvme_io": false, 00:13:12.138 "nvme_io_md": false, 00:13:12.138 "write_zeroes": true, 00:13:12.138 "zcopy": false, 00:13:12.138 "get_zone_info": false, 00:13:12.138 "zone_management": false, 00:13:12.138 "zone_append": false, 00:13:12.138 "compare": false, 00:13:12.138 "compare_and_write": false, 00:13:12.138 "abort": false, 00:13:12.138 "seek_hole": false, 00:13:12.138 "seek_data": false, 00:13:12.138 "copy": false, 00:13:12.138 "nvme_iov_md": false 00:13:12.138 }, 00:13:12.138 "memory_domains": [ 00:13:12.138 { 00:13:12.138 "dma_device_id": "system", 00:13:12.138 "dma_device_type": 1 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.138 "dma_device_type": 2 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "system", 00:13:12.138 "dma_device_type": 1 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.138 "dma_device_type": 2 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "system", 00:13:12.138 "dma_device_type": 1 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.138 "dma_device_type": 2 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": "system", 00:13:12.138 "dma_device_type": 1 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:12.138 "dma_device_type": 2 00:13:12.138 } 00:13:12.138 ], 00:13:12.138 "driver_specific": { 00:13:12.138 "raid": { 00:13:12.138 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:12.138 "strip_size_kb": 64, 00:13:12.138 "state": "online", 00:13:12.138 "raid_level": "concat", 00:13:12.138 "superblock": true, 00:13:12.138 "num_base_bdevs": 4, 00:13:12.138 "num_base_bdevs_discovered": 4, 00:13:12.138 "num_base_bdevs_operational": 4, 00:13:12.138 "base_bdevs_list": [ 00:13:12.138 { 00:13:12.138 "name": "pt1", 00:13:12.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.138 "is_configured": true, 00:13:12.138 "data_offset": 2048, 00:13:12.138 "data_size": 63488 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "name": "pt2", 00:13:12.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.138 "is_configured": true, 00:13:12.138 "data_offset": 2048, 00:13:12.138 "data_size": 63488 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "name": "pt3", 00:13:12.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.138 "is_configured": true, 00:13:12.138 "data_offset": 2048, 00:13:12.138 "data_size": 63488 00:13:12.138 }, 00:13:12.138 { 00:13:12.138 "name": "pt4", 00:13:12.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.138 "is_configured": true, 00:13:12.138 "data_offset": 2048, 00:13:12.138 "data_size": 63488 00:13:12.138 } 00:13:12.138 ] 00:13:12.138 } 00:13:12.138 } 00:13:12.138 }' 00:13:12.138 16:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:12.138 pt2 00:13:12.138 pt3 00:13:12.138 pt4' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.138 16:27:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.138 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:12.398 [2024-11-05 16:27:25.293072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e67d969d-6419-4518-ae36-ecf18b9fb1a7 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e67d969d-6419-4518-ae36-ecf18b9fb1a7 ']' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.398 [2024-11-05 16:27:25.340709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.398 [2024-11-05 16:27:25.340759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.398 [2024-11-05 16:27:25.340876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.398 [2024-11-05 16:27:25.340964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.398 [2024-11-05 16:27:25.340983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.398 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:12.399 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.659 [2024-11-05 16:27:25.512771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:12.659 [2024-11-05 16:27:25.515550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:12.659 [2024-11-05 16:27:25.515663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:12.659 [2024-11-05 16:27:25.515728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:12.659 [2024-11-05 16:27:25.515831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:12.659 [2024-11-05 16:27:25.515977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:12.659 [2024-11-05 16:27:25.516059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:12.659 [2024-11-05 16:27:25.516165] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:12.659 [2024-11-05 16:27:25.516224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.659 [2024-11-05 16:27:25.516270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:12.659 request: 00:13:12.659 { 00:13:12.659 "name": "raid_bdev1", 00:13:12.659 "raid_level": "concat", 00:13:12.659 "base_bdevs": [ 00:13:12.659 "malloc1", 00:13:12.659 "malloc2", 00:13:12.659 "malloc3", 00:13:12.659 "malloc4" 00:13:12.659 ], 00:13:12.659 "strip_size_kb": 64, 00:13:12.659 "superblock": false, 00:13:12.659 "method": "bdev_raid_create", 00:13:12.659 "req_id": 1 00:13:12.659 } 00:13:12.659 Got JSON-RPC error response 00:13:12.659 response: 00:13:12.659 { 00:13:12.659 "code": -17, 00:13:12.659 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:12.659 } 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.659 [2024-11-05 16:27:25.584813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.659 [2024-11-05 16:27:25.584920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.659 [2024-11-05 16:27:25.584945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:12.659 [2024-11-05 16:27:25.584959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.659 [2024-11-05 16:27:25.587754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.659 [2024-11-05 16:27:25.587799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.659 [2024-11-05 16:27:25.587918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:12.659 [2024-11-05 16:27:25.588000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.659 pt1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.659 "name": "raid_bdev1", 00:13:12.659 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:12.659 "strip_size_kb": 64, 00:13:12.659 "state": "configuring", 00:13:12.659 "raid_level": "concat", 00:13:12.659 "superblock": true, 00:13:12.659 "num_base_bdevs": 4, 00:13:12.659 "num_base_bdevs_discovered": 1, 00:13:12.659 "num_base_bdevs_operational": 4, 00:13:12.659 "base_bdevs_list": [ 00:13:12.659 { 00:13:12.659 "name": "pt1", 00:13:12.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.659 "is_configured": true, 00:13:12.659 "data_offset": 2048, 00:13:12.659 "data_size": 63488 00:13:12.659 }, 00:13:12.659 { 00:13:12.659 "name": null, 00:13:12.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.659 "is_configured": false, 00:13:12.659 "data_offset": 2048, 00:13:12.659 "data_size": 63488 00:13:12.659 }, 00:13:12.659 { 00:13:12.659 "name": null, 00:13:12.659 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.659 "is_configured": false, 00:13:12.659 "data_offset": 2048, 00:13:12.659 "data_size": 63488 00:13:12.659 }, 00:13:12.659 { 00:13:12.659 "name": null, 00:13:12.659 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.659 "is_configured": false, 00:13:12.659 "data_offset": 2048, 00:13:12.659 "data_size": 63488 00:13:12.659 } 00:13:12.659 ] 00:13:12.659 }' 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.659 16:27:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.919 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:12.919 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.919 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.919 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.179 [2024-11-05 16:27:26.016139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.179 [2024-11-05 16:27:26.016357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.179 [2024-11-05 16:27:26.016413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:13.179 [2024-11-05 16:27:26.016453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.179 [2024-11-05 16:27:26.017138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.179 [2024-11-05 16:27:26.017228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.179 [2024-11-05 16:27:26.017387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.179 [2024-11-05 16:27:26.017451] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.179 pt2 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.179 [2024-11-05 16:27:26.028095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.179 16:27:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.179 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.180 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.180 "name": "raid_bdev1", 00:13:13.180 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:13.180 "strip_size_kb": 64, 00:13:13.180 "state": "configuring", 00:13:13.180 "raid_level": "concat", 00:13:13.180 "superblock": true, 00:13:13.180 "num_base_bdevs": 4, 00:13:13.180 "num_base_bdevs_discovered": 1, 00:13:13.180 "num_base_bdevs_operational": 4, 00:13:13.180 "base_bdevs_list": [ 00:13:13.180 { 00:13:13.180 "name": "pt1", 00:13:13.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.180 "is_configured": true, 00:13:13.180 "data_offset": 2048, 00:13:13.180 "data_size": 63488 00:13:13.180 }, 00:13:13.180 { 00:13:13.180 "name": null, 00:13:13.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.180 "is_configured": false, 00:13:13.180 "data_offset": 0, 00:13:13.180 "data_size": 63488 00:13:13.180 }, 00:13:13.180 { 00:13:13.180 "name": null, 00:13:13.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.180 "is_configured": false, 00:13:13.180 "data_offset": 2048, 00:13:13.180 "data_size": 63488 00:13:13.180 }, 00:13:13.180 { 00:13:13.180 "name": null, 00:13:13.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.180 "is_configured": false, 00:13:13.180 "data_offset": 2048, 00:13:13.180 "data_size": 63488 00:13:13.180 } 00:13:13.180 ] 00:13:13.180 }' 00:13:13.180 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.180 16:27:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.440 [2024-11-05 16:27:26.443406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.440 [2024-11-05 16:27:26.443510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.440 [2024-11-05 16:27:26.443547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:13.440 [2024-11-05 16:27:26.443558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.440 [2024-11-05 16:27:26.444122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.440 [2024-11-05 16:27:26.444142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.440 [2024-11-05 16:27:26.444273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.440 [2024-11-05 16:27:26.444300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.440 pt2 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.440 [2024-11-05 16:27:26.455313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.440 [2024-11-05 16:27:26.455380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.440 [2024-11-05 16:27:26.455408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:13.440 [2024-11-05 16:27:26.455420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.440 [2024-11-05 16:27:26.455876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.440 [2024-11-05 16:27:26.455894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.440 [2024-11-05 16:27:26.455974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.440 [2024-11-05 16:27:26.455994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.440 pt3 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.440 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.441 [2024-11-05 16:27:26.467255] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.441 [2024-11-05 16:27:26.467312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.441 [2024-11-05 16:27:26.467333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:13.441 [2024-11-05 16:27:26.467342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.441 [2024-11-05 16:27:26.467822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.441 [2024-11-05 16:27:26.467839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.441 [2024-11-05 16:27:26.467913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.441 [2024-11-05 16:27:26.467934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.441 [2024-11-05 16:27:26.468085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.441 [2024-11-05 16:27:26.468101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.441 [2024-11-05 16:27:26.468379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:13.441 [2024-11-05 16:27:26.468597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.441 [2024-11-05 16:27:26.468614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:13.441 [2024-11-05 16:27:26.468786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.441 pt4 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.441 "name": "raid_bdev1", 00:13:13.441 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:13.441 "strip_size_kb": 64, 00:13:13.441 "state": "online", 00:13:13.441 "raid_level": "concat", 00:13:13.441 
"superblock": true, 00:13:13.441 "num_base_bdevs": 4, 00:13:13.441 "num_base_bdevs_discovered": 4, 00:13:13.441 "num_base_bdevs_operational": 4, 00:13:13.441 "base_bdevs_list": [ 00:13:13.441 { 00:13:13.441 "name": "pt1", 00:13:13.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.441 "is_configured": true, 00:13:13.441 "data_offset": 2048, 00:13:13.441 "data_size": 63488 00:13:13.441 }, 00:13:13.441 { 00:13:13.441 "name": "pt2", 00:13:13.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.441 "is_configured": true, 00:13:13.441 "data_offset": 2048, 00:13:13.441 "data_size": 63488 00:13:13.441 }, 00:13:13.441 { 00:13:13.441 "name": "pt3", 00:13:13.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.441 "is_configured": true, 00:13:13.441 "data_offset": 2048, 00:13:13.441 "data_size": 63488 00:13:13.441 }, 00:13:13.441 { 00:13:13.441 "name": "pt4", 00:13:13.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.441 "is_configured": true, 00:13:13.441 "data_offset": 2048, 00:13:13.441 "data_size": 63488 00:13:13.441 } 00:13:13.441 ] 00:13:13.441 }' 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.441 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:14.009 16:27:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.009 [2024-11-05 16:27:26.947016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.009 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:14.009 "name": "raid_bdev1", 00:13:14.009 "aliases": [ 00:13:14.009 "e67d969d-6419-4518-ae36-ecf18b9fb1a7" 00:13:14.009 ], 00:13:14.009 "product_name": "Raid Volume", 00:13:14.009 "block_size": 512, 00:13:14.009 "num_blocks": 253952, 00:13:14.009 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:14.009 "assigned_rate_limits": { 00:13:14.009 "rw_ios_per_sec": 0, 00:13:14.009 "rw_mbytes_per_sec": 0, 00:13:14.009 "r_mbytes_per_sec": 0, 00:13:14.009 "w_mbytes_per_sec": 0 00:13:14.009 }, 00:13:14.009 "claimed": false, 00:13:14.009 "zoned": false, 00:13:14.009 "supported_io_types": { 00:13:14.009 "read": true, 00:13:14.009 "write": true, 00:13:14.009 "unmap": true, 00:13:14.009 "flush": true, 00:13:14.009 "reset": true, 00:13:14.009 "nvme_admin": false, 00:13:14.009 "nvme_io": false, 00:13:14.009 "nvme_io_md": false, 00:13:14.009 "write_zeroes": true, 00:13:14.009 "zcopy": false, 00:13:14.010 "get_zone_info": false, 00:13:14.010 "zone_management": false, 00:13:14.010 "zone_append": false, 00:13:14.010 "compare": false, 00:13:14.010 "compare_and_write": false, 00:13:14.010 "abort": false, 00:13:14.010 "seek_hole": false, 00:13:14.010 "seek_data": false, 00:13:14.010 "copy": false, 00:13:14.010 "nvme_iov_md": false 00:13:14.010 }, 00:13:14.010 
"memory_domains": [ 00:13:14.010 { 00:13:14.010 "dma_device_id": "system", 00:13:14.010 "dma_device_type": 1 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.010 "dma_device_type": 2 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "system", 00:13:14.010 "dma_device_type": 1 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.010 "dma_device_type": 2 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "system", 00:13:14.010 "dma_device_type": 1 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.010 "dma_device_type": 2 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "system", 00:13:14.010 "dma_device_type": 1 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.010 "dma_device_type": 2 00:13:14.010 } 00:13:14.010 ], 00:13:14.010 "driver_specific": { 00:13:14.010 "raid": { 00:13:14.010 "uuid": "e67d969d-6419-4518-ae36-ecf18b9fb1a7", 00:13:14.010 "strip_size_kb": 64, 00:13:14.010 "state": "online", 00:13:14.010 "raid_level": "concat", 00:13:14.010 "superblock": true, 00:13:14.010 "num_base_bdevs": 4, 00:13:14.010 "num_base_bdevs_discovered": 4, 00:13:14.010 "num_base_bdevs_operational": 4, 00:13:14.010 "base_bdevs_list": [ 00:13:14.010 { 00:13:14.010 "name": "pt1", 00:13:14.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.010 "is_configured": true, 00:13:14.010 "data_offset": 2048, 00:13:14.010 "data_size": 63488 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "name": "pt2", 00:13:14.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.010 "is_configured": true, 00:13:14.010 "data_offset": 2048, 00:13:14.010 "data_size": 63488 00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "name": "pt3", 00:13:14.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.010 "is_configured": true, 00:13:14.010 "data_offset": 2048, 00:13:14.010 "data_size": 63488 
00:13:14.010 }, 00:13:14.010 { 00:13:14.010 "name": "pt4", 00:13:14.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.010 "is_configured": true, 00:13:14.010 "data_offset": 2048, 00:13:14.010 "data_size": 63488 00:13:14.010 } 00:13:14.010 ] 00:13:14.010 } 00:13:14.010 } 00:13:14.010 }' 00:13:14.010 16:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:14.010 pt2 00:13:14.010 pt3 00:13:14.010 pt4' 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.010 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.269 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 [2024-11-05 16:27:27.282291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e67d969d-6419-4518-ae36-ecf18b9fb1a7 '!=' e67d969d-6419-4518-ae36-ecf18b9fb1a7 ']' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72939 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72939 ']' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72939 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:14.270 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72939 00:13:14.530 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:14.530 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:14.530 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72939' 00:13:14.530 killing process with pid 72939 00:13:14.530 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72939 00:13:14.530 [2024-11-05 16:27:27.367234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.530 16:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72939 00:13:14.530 [2024-11-05 16:27:27.367477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.530 [2024-11-05 16:27:27.367597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.530 [2024-11-05 16:27:27.367611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:14.789 [2024-11-05 16:27:27.876233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.196 16:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:16.196 00:13:16.196 real 0m6.079s 00:13:16.196 user 0m8.443s 00:13:16.196 sys 0m1.046s 00:13:16.196 16:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.196 ************************************ 00:13:16.196 END TEST raid_superblock_test 00:13:16.196 ************************************ 00:13:16.196 16:27:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.455 16:27:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:16.455 16:27:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:16.455 16:27:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.455 16:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.455 ************************************ 00:13:16.455 START TEST raid_read_error_test 00:13:16.455 ************************************ 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s9yvE6uh3s 00:13:16.455 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73209 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73209 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73209 ']' 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.456 16:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.456 [2024-11-05 16:27:29.471620] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:13:16.456 [2024-11-05 16:27:29.472287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73209 ] 00:13:16.716 [2024-11-05 16:27:29.649542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.716 [2024-11-05 16:27:29.798705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.978 [2024-11-05 16:27:30.047118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.978 [2024-11-05 16:27:30.047184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 BaseBdev1_malloc 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 true 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.547 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.547 [2024-11-05 16:27:30.425103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:17.547 [2024-11-05 16:27:30.425293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.547 [2024-11-05 16:27:30.425347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:17.547 [2024-11-05 16:27:30.425388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.547 [2024-11-05 16:27:30.428386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.548 [2024-11-05 16:27:30.428530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.548 BaseBdev1 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 BaseBdev2_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 true 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 [2024-11-05 16:27:30.502272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:17.548 [2024-11-05 16:27:30.502379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.548 [2024-11-05 16:27:30.502407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:17.548 [2024-11-05 16:27:30.502422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.548 [2024-11-05 16:27:30.505363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.548 [2024-11-05 16:27:30.505533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.548 BaseBdev2 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 BaseBdev3_malloc 00:13:17.548 16:27:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 true 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 [2024-11-05 16:27:30.594391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:17.548 [2024-11-05 16:27:30.594486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.548 [2024-11-05 16:27:30.594512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:17.548 [2024-11-05 16:27:30.594543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.548 [2024-11-05 16:27:30.597322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.548 [2024-11-05 16:27:30.597470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.548 BaseBdev3 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.548 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 BaseBdev4_malloc 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 true 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 [2024-11-05 16:27:30.673594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:17.807 [2024-11-05 16:27:30.673799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.807 [2024-11-05 16:27:30.673837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:17.807 [2024-11-05 16:27:30.673853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.807 [2024-11-05 16:27:30.676800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.807 [2024-11-05 16:27:30.676861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:17.807 BaseBdev4 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.807 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 [2024-11-05 16:27:30.685679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.807 [2024-11-05 16:27:30.688170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.808 [2024-11-05 16:27:30.688386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.808 [2024-11-05 16:27:30.688482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.808 [2024-11-05 16:27:30.688814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:17.808 [2024-11-05 16:27:30.688832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:17.808 [2024-11-05 16:27:30.689212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:17.808 [2024-11-05 16:27:30.689425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:17.808 [2024-11-05 16:27:30.689438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:17.808 [2024-11-05 16:27:30.689782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:17.808 16:27:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.808 "name": "raid_bdev1", 00:13:17.808 "uuid": "70de456d-3ff8-4cff-90ad-6a5c2515629f", 00:13:17.808 "strip_size_kb": 64, 00:13:17.808 "state": "online", 00:13:17.808 "raid_level": "concat", 00:13:17.808 "superblock": true, 00:13:17.808 "num_base_bdevs": 4, 00:13:17.808 "num_base_bdevs_discovered": 4, 00:13:17.808 "num_base_bdevs_operational": 4, 00:13:17.808 "base_bdevs_list": [ 
00:13:17.808 { 00:13:17.808 "name": "BaseBdev1", 00:13:17.808 "uuid": "c3c9736e-6bae-5f1d-9808-488456b3c896", 00:13:17.808 "is_configured": true, 00:13:17.808 "data_offset": 2048, 00:13:17.808 "data_size": 63488 00:13:17.808 }, 00:13:17.808 { 00:13:17.808 "name": "BaseBdev2", 00:13:17.808 "uuid": "d292316c-2c98-5884-861d-734c52d500fd", 00:13:17.808 "is_configured": true, 00:13:17.808 "data_offset": 2048, 00:13:17.808 "data_size": 63488 00:13:17.808 }, 00:13:17.808 { 00:13:17.808 "name": "BaseBdev3", 00:13:17.808 "uuid": "f9ef7e95-ff5a-5af1-8af4-b652d394f0cc", 00:13:17.808 "is_configured": true, 00:13:17.808 "data_offset": 2048, 00:13:17.808 "data_size": 63488 00:13:17.808 }, 00:13:17.808 { 00:13:17.808 "name": "BaseBdev4", 00:13:17.808 "uuid": "a7654a49-97b1-525f-b09b-97f0983d5e1f", 00:13:17.808 "is_configured": true, 00:13:17.808 "data_offset": 2048, 00:13:17.808 "data_size": 63488 00:13:17.808 } 00:13:17.808 ] 00:13:17.808 }' 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.808 16:27:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 16:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:18.376 16:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:18.376 [2024-11-05 16:27:31.278272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.312 16:27:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.312 16:27:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.312 "name": "raid_bdev1", 00:13:19.312 "uuid": "70de456d-3ff8-4cff-90ad-6a5c2515629f", 00:13:19.312 "strip_size_kb": 64, 00:13:19.312 "state": "online", 00:13:19.312 "raid_level": "concat", 00:13:19.312 "superblock": true, 00:13:19.312 "num_base_bdevs": 4, 00:13:19.312 "num_base_bdevs_discovered": 4, 00:13:19.312 "num_base_bdevs_operational": 4, 00:13:19.312 "base_bdevs_list": [ 00:13:19.312 { 00:13:19.312 "name": "BaseBdev1", 00:13:19.312 "uuid": "c3c9736e-6bae-5f1d-9808-488456b3c896", 00:13:19.312 "is_configured": true, 00:13:19.312 "data_offset": 2048, 00:13:19.312 "data_size": 63488 00:13:19.312 }, 00:13:19.312 { 00:13:19.312 "name": "BaseBdev2", 00:13:19.312 "uuid": "d292316c-2c98-5884-861d-734c52d500fd", 00:13:19.312 "is_configured": true, 00:13:19.312 "data_offset": 2048, 00:13:19.312 "data_size": 63488 00:13:19.312 }, 00:13:19.312 { 00:13:19.312 "name": "BaseBdev3", 00:13:19.312 "uuid": "f9ef7e95-ff5a-5af1-8af4-b652d394f0cc", 00:13:19.312 "is_configured": true, 00:13:19.312 "data_offset": 2048, 00:13:19.312 "data_size": 63488 00:13:19.312 }, 00:13:19.312 { 00:13:19.312 "name": "BaseBdev4", 00:13:19.312 "uuid": "a7654a49-97b1-525f-b09b-97f0983d5e1f", 00:13:19.312 "is_configured": true, 00:13:19.312 "data_offset": 2048, 00:13:19.312 "data_size": 63488 00:13:19.312 } 00:13:19.312 ] 00:13:19.312 }' 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.312 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 [2024-11-05 16:27:32.616597] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.570 [2024-11-05 16:27:32.616780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.570 [2024-11-05 16:27:32.620026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.570 [2024-11-05 16:27:32.620181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.570 [2024-11-05 16:27:32.620253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.570 [2024-11-05 16:27:32.620275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:19.570 { 00:13:19.570 "results": [ 00:13:19.570 { 00:13:19.570 "job": "raid_bdev1", 00:13:19.570 "core_mask": "0x1", 00:13:19.570 "workload": "randrw", 00:13:19.570 "percentage": 50, 00:13:19.570 "status": "finished", 00:13:19.570 "queue_depth": 1, 00:13:19.570 "io_size": 131072, 00:13:19.570 "runtime": 1.338451, 00:13:19.570 "iops": 11744.920060577488, 00:13:19.570 "mibps": 1468.115007572186, 00:13:19.570 "io_failed": 1, 00:13:19.570 "io_timeout": 0, 00:13:19.570 "avg_latency_us": 119.95648887297578, 00:13:19.570 "min_latency_us": 29.065502183406114, 00:13:19.570 "max_latency_us": 1502.46288209607 00:13:19.570 } 00:13:19.570 ], 00:13:19.570 "core_count": 1 00:13:19.570 } 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73209 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73209 ']' 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73209 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:19.570 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73209 00:13:19.829 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:19.829 killing process with pid 73209 00:13:19.829 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:19.829 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73209' 00:13:19.829 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73209 00:13:19.829 [2024-11-05 16:27:32.663386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.829 16:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73209 00:13:20.088 [2024-11-05 16:27:33.055810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s9yvE6uh3s 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:21.463 ************************************ 00:13:21.463 END TEST raid_read_error_test 00:13:21.463 ************************************ 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:13:21.463 00:13:21.463 real 0m5.091s 
00:13:21.463 user 0m5.897s 00:13:21.463 sys 0m0.727s 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.463 16:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.463 16:27:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:21.463 16:27:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:21.463 16:27:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.463 16:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.463 ************************************ 00:13:21.463 START TEST raid_write_error_test 00:13:21.463 ************************************ 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.01bPLBRz4B 00:13:21.464 16:27:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73359 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73359 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73359 ']' 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.464 16:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.722 [2024-11-05 16:27:34.626315] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:13:21.722 [2024-11-05 16:27:34.626433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73359 ] 00:13:21.722 [2024-11-05 16:27:34.804119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.980 [2024-11-05 16:27:34.946802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.239 [2024-11-05 16:27:35.188248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.239 [2024-11-05 16:27:35.188451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.499 BaseBdev1_malloc 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.499 true 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.499 [2024-11-05 16:27:35.554254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:22.499 [2024-11-05 16:27:35.554355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.499 [2024-11-05 16:27:35.554387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:22.499 [2024-11-05 16:27:35.554400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.499 [2024-11-05 16:27:35.557340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.499 [2024-11-05 16:27:35.557486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.499 BaseBdev1 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.499 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 BaseBdev2_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:22.758 16:27:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 true 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 [2024-11-05 16:27:35.632930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:22.758 [2024-11-05 16:27:35.633105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.758 [2024-11-05 16:27:35.633150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:22.758 [2024-11-05 16:27:35.633182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.758 [2024-11-05 16:27:35.635850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.758 [2024-11-05 16:27:35.635941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.758 BaseBdev2 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:22.758 BaseBdev3_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 true 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 [2024-11-05 16:27:35.725042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:22.758 [2024-11-05 16:27:35.725215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.758 [2024-11-05 16:27:35.725249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:22.758 [2024-11-05 16:27:35.725261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.758 [2024-11-05 16:27:35.728005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.758 [2024-11-05 16:27:35.728057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:22.758 BaseBdev3 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 BaseBdev4_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.758 true 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.758 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.759 [2024-11-05 16:27:35.802377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:22.759 [2024-11-05 16:27:35.802561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.759 [2024-11-05 16:27:35.802620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:22.759 [2024-11-05 16:27:35.802656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.759 [2024-11-05 16:27:35.805405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.759 [2024-11-05 16:27:35.805514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:22.759 BaseBdev4 
00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.759 [2024-11-05 16:27:35.814460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.759 [2024-11-05 16:27:35.816853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.759 [2024-11-05 16:27:35.816997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.759 [2024-11-05 16:27:35.817108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:22.759 [2024-11-05 16:27:35.817476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:22.759 [2024-11-05 16:27:35.817552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:22.759 [2024-11-05 16:27:35.817933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:22.759 [2024-11-05 16:27:35.818169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:22.759 [2024-11-05 16:27:35.818213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:22.759 [2024-11-05 16:27:35.818574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.759 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.021 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.021 "name": "raid_bdev1", 00:13:23.021 "uuid": "b965ae09-13c4-4f1b-a9f9-a047ce99d169", 00:13:23.021 "strip_size_kb": 64, 00:13:23.021 "state": "online", 00:13:23.021 "raid_level": "concat", 00:13:23.021 "superblock": true, 00:13:23.021 "num_base_bdevs": 4, 00:13:23.021 "num_base_bdevs_discovered": 4, 00:13:23.021 
"num_base_bdevs_operational": 4, 00:13:23.021 "base_bdevs_list": [ 00:13:23.021 { 00:13:23.021 "name": "BaseBdev1", 00:13:23.021 "uuid": "f070e01a-5adb-5220-b3b5-8fe1c557d4b3", 00:13:23.021 "is_configured": true, 00:13:23.021 "data_offset": 2048, 00:13:23.021 "data_size": 63488 00:13:23.021 }, 00:13:23.021 { 00:13:23.021 "name": "BaseBdev2", 00:13:23.021 "uuid": "2f2e0a8e-c7f3-54cd-a7d8-f465b623aaa3", 00:13:23.021 "is_configured": true, 00:13:23.021 "data_offset": 2048, 00:13:23.021 "data_size": 63488 00:13:23.021 }, 00:13:23.021 { 00:13:23.021 "name": "BaseBdev3", 00:13:23.021 "uuid": "045c18e1-26ee-5fca-9d95-f562d0fcbdfb", 00:13:23.021 "is_configured": true, 00:13:23.021 "data_offset": 2048, 00:13:23.021 "data_size": 63488 00:13:23.021 }, 00:13:23.021 { 00:13:23.021 "name": "BaseBdev4", 00:13:23.021 "uuid": "425b0a2b-ede3-5fd7-8212-085d83d53b6d", 00:13:23.021 "is_configured": true, 00:13:23.021 "data_offset": 2048, 00:13:23.021 "data_size": 63488 00:13:23.021 } 00:13:23.021 ] 00:13:23.022 }' 00:13:23.022 16:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.022 16:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.296 16:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:23.296 16:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.554 [2024-11-05 16:27:36.411303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.492 16:27:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.492 "name": "raid_bdev1", 00:13:24.492 "uuid": "b965ae09-13c4-4f1b-a9f9-a047ce99d169", 00:13:24.492 "strip_size_kb": 64, 00:13:24.492 "state": "online", 00:13:24.492 "raid_level": "concat", 00:13:24.492 "superblock": true, 00:13:24.492 "num_base_bdevs": 4, 00:13:24.492 "num_base_bdevs_discovered": 4, 00:13:24.492 "num_base_bdevs_operational": 4, 00:13:24.492 "base_bdevs_list": [ 00:13:24.492 { 00:13:24.492 "name": "BaseBdev1", 00:13:24.492 "uuid": "f070e01a-5adb-5220-b3b5-8fe1c557d4b3", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 2048, 00:13:24.492 "data_size": 63488 00:13:24.492 }, 00:13:24.492 { 00:13:24.492 "name": "BaseBdev2", 00:13:24.492 "uuid": "2f2e0a8e-c7f3-54cd-a7d8-f465b623aaa3", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 2048, 00:13:24.492 "data_size": 63488 00:13:24.492 }, 00:13:24.492 { 00:13:24.492 "name": "BaseBdev3", 00:13:24.492 "uuid": "045c18e1-26ee-5fca-9d95-f562d0fcbdfb", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 2048, 00:13:24.492 "data_size": 63488 00:13:24.492 }, 00:13:24.492 { 00:13:24.492 "name": "BaseBdev4", 00:13:24.492 "uuid": "425b0a2b-ede3-5fd7-8212-085d83d53b6d", 00:13:24.492 "is_configured": true, 00:13:24.492 "data_offset": 2048, 00:13:24.492 "data_size": 63488 00:13:24.492 } 00:13:24.492 ] 00:13:24.492 }' 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.492 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.751 [2024-11-05 16:27:37.793296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.751 [2024-11-05 16:27:37.793455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.751 [2024-11-05 16:27:37.796189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.751 [2024-11-05 16:27:37.796266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.751 [2024-11-05 16:27:37.796314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.751 [2024-11-05 16:27:37.796331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:24.751 { 00:13:24.751 "results": [ 00:13:24.751 { 00:13:24.751 "job": "raid_bdev1", 00:13:24.751 "core_mask": "0x1", 00:13:24.751 "workload": "randrw", 00:13:24.751 "percentage": 50, 00:13:24.751 "status": "finished", 00:13:24.751 "queue_depth": 1, 00:13:24.751 "io_size": 131072, 00:13:24.751 "runtime": 1.382516, 00:13:24.751 "iops": 12202.390424414618, 00:13:24.751 "mibps": 1525.2988030518272, 00:13:24.751 "io_failed": 1, 00:13:24.751 "io_timeout": 0, 00:13:24.751 "avg_latency_us": 115.5280658083857, 00:13:24.751 "min_latency_us": 28.17117903930131, 00:13:24.751 "max_latency_us": 1817.2646288209608 00:13:24.751 } 00:13:24.751 ], 00:13:24.751 "core_count": 1 00:13:24.751 } 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73359 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73359 ']' 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73359 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73359 00:13:24.751 killing process with pid 73359 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73359' 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73359 00:13:24.751 [2024-11-05 16:27:37.838750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.751 16:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73359 00:13:25.318 [2024-11-05 16:27:38.233465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.01bPLBRz4B 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:26.701 00:13:26.701 real 0m5.164s 00:13:26.701 user 0m6.006s 
00:13:26.701 sys 0m0.684s 00:13:26.701 ************************************ 00:13:26.701 END TEST raid_write_error_test 00:13:26.701 ************************************ 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.701 16:27:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.701 16:27:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:26.701 16:27:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:26.701 16:27:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:26.701 16:27:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.701 16:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.701 ************************************ 00:13:26.701 START TEST raid_state_function_test 00:13:26.701 ************************************ 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.701 
16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:26.701 16:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:26.701 Process raid pid: 73507 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73507 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73507' 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73507 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73507 ']' 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:26.701 16:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.962 [2024-11-05 16:27:39.855674] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:13:26.962 [2024-11-05 16:27:39.855816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.962 [2024-11-05 16:27:40.043331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.221 [2024-11-05 16:27:40.204198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.479 [2024-11-05 16:27:40.470743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.479 [2024-11-05 16:27:40.470828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.737 [2024-11-05 16:27:40.741714] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.737 [2024-11-05 16:27:40.741794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.737 [2024-11-05 16:27:40.741805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.737 [2024-11-05 16:27:40.741816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.737 [2024-11-05 16:27:40.741823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:27.737 [2024-11-05 16:27:40.741832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.737 [2024-11-05 16:27:40.741840] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:27.737 [2024-11-05 16:27:40.741849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.737 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.738 "name": "Existed_Raid", 00:13:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.738 "strip_size_kb": 0, 00:13:27.738 "state": "configuring", 00:13:27.738 "raid_level": "raid1", 00:13:27.738 "superblock": false, 00:13:27.738 "num_base_bdevs": 4, 00:13:27.738 "num_base_bdevs_discovered": 0, 00:13:27.738 "num_base_bdevs_operational": 4, 00:13:27.738 "base_bdevs_list": [ 00:13:27.738 { 00:13:27.738 "name": "BaseBdev1", 00:13:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.738 "is_configured": false, 00:13:27.738 "data_offset": 0, 00:13:27.738 "data_size": 0 00:13:27.738 }, 00:13:27.738 { 00:13:27.738 "name": "BaseBdev2", 00:13:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.738 "is_configured": false, 00:13:27.738 "data_offset": 0, 00:13:27.738 "data_size": 0 00:13:27.738 }, 00:13:27.738 { 00:13:27.738 "name": "BaseBdev3", 00:13:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.738 "is_configured": false, 00:13:27.738 "data_offset": 0, 00:13:27.738 "data_size": 0 00:13:27.738 }, 00:13:27.738 { 00:13:27.738 "name": "BaseBdev4", 00:13:27.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.738 "is_configured": false, 00:13:27.738 "data_offset": 0, 00:13:27.738 "data_size": 0 00:13:27.738 } 00:13:27.738 ] 00:13:27.738 }' 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.738 16:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.304 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:28.304 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.304 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.304 [2024-11-05 16:27:41.216869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.305 [2024-11-05 16:27:41.217030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 [2024-11-05 16:27:41.224829] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.305 [2024-11-05 16:27:41.224950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.305 [2024-11-05 16:27:41.224991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.305 [2024-11-05 16:27:41.225021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.305 [2024-11-05 16:27:41.225079] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:28.305 [2024-11-05 16:27:41.225119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.305 [2024-11-05 16:27:41.225153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:28.305 [2024-11-05 16:27:41.225183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 [2024-11-05 16:27:41.282858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.305 BaseBdev1 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 [ 00:13:28.305 { 00:13:28.305 "name": "BaseBdev1", 00:13:28.305 "aliases": [ 00:13:28.305 "c3b69ea4-89bd-4977-b539-35659b9d826d" 00:13:28.305 ], 00:13:28.305 "product_name": "Malloc disk", 00:13:28.305 "block_size": 512, 00:13:28.305 "num_blocks": 65536, 00:13:28.305 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:28.305 "assigned_rate_limits": { 00:13:28.305 "rw_ios_per_sec": 0, 00:13:28.305 "rw_mbytes_per_sec": 0, 00:13:28.305 "r_mbytes_per_sec": 0, 00:13:28.305 "w_mbytes_per_sec": 0 00:13:28.305 }, 00:13:28.305 "claimed": true, 00:13:28.305 "claim_type": "exclusive_write", 00:13:28.305 "zoned": false, 00:13:28.305 "supported_io_types": { 00:13:28.305 "read": true, 00:13:28.305 "write": true, 00:13:28.305 "unmap": true, 00:13:28.305 "flush": true, 00:13:28.305 "reset": true, 00:13:28.305 "nvme_admin": false, 00:13:28.305 "nvme_io": false, 00:13:28.305 "nvme_io_md": false, 00:13:28.305 "write_zeroes": true, 00:13:28.305 "zcopy": true, 00:13:28.305 "get_zone_info": false, 00:13:28.305 "zone_management": false, 00:13:28.305 "zone_append": false, 00:13:28.305 "compare": false, 00:13:28.305 "compare_and_write": false, 00:13:28.305 "abort": true, 00:13:28.305 "seek_hole": false, 00:13:28.305 "seek_data": false, 00:13:28.305 "copy": true, 00:13:28.305 "nvme_iov_md": false 00:13:28.305 }, 00:13:28.305 "memory_domains": [ 00:13:28.305 { 00:13:28.305 "dma_device_id": "system", 00:13:28.305 "dma_device_type": 1 00:13:28.305 }, 00:13:28.305 { 00:13:28.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.305 "dma_device_type": 2 00:13:28.305 } 00:13:28.305 ], 00:13:28.305 "driver_specific": {} 00:13:28.305 } 00:13:28.305 ] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.305 "name": "Existed_Raid", 00:13:28.305 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:28.305 "strip_size_kb": 0, 00:13:28.305 "state": "configuring", 00:13:28.305 "raid_level": "raid1", 00:13:28.305 "superblock": false, 00:13:28.305 "num_base_bdevs": 4, 00:13:28.305 "num_base_bdevs_discovered": 1, 00:13:28.305 "num_base_bdevs_operational": 4, 00:13:28.305 "base_bdevs_list": [ 00:13:28.305 { 00:13:28.305 "name": "BaseBdev1", 00:13:28.305 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:28.305 "is_configured": true, 00:13:28.305 "data_offset": 0, 00:13:28.305 "data_size": 65536 00:13:28.305 }, 00:13:28.305 { 00:13:28.305 "name": "BaseBdev2", 00:13:28.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.305 "is_configured": false, 00:13:28.305 "data_offset": 0, 00:13:28.305 "data_size": 0 00:13:28.305 }, 00:13:28.305 { 00:13:28.305 "name": "BaseBdev3", 00:13:28.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.305 "is_configured": false, 00:13:28.305 "data_offset": 0, 00:13:28.305 "data_size": 0 00:13:28.305 }, 00:13:28.305 { 00:13:28.305 "name": "BaseBdev4", 00:13:28.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.305 "is_configured": false, 00:13:28.305 "data_offset": 0, 00:13:28.305 "data_size": 0 00:13:28.305 } 00:13:28.305 ] 00:13:28.305 }' 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.305 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 [2024-11-05 16:27:41.742193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.872 [2024-11-05 16:27:41.742289] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 [2024-11-05 16:27:41.754205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.872 [2024-11-05 16:27:41.756702] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.872 [2024-11-05 16:27:41.756747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.872 [2024-11-05 16:27:41.756759] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:28.872 [2024-11-05 16:27:41.756772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.872 [2024-11-05 16:27:41.756780] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:28.872 [2024-11-05 16:27:41.756789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.872 16:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.872 "name": "Existed_Raid", 00:13:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.872 "strip_size_kb": 0, 00:13:28.872 "state": "configuring", 00:13:28.872 "raid_level": "raid1", 00:13:28.872 "superblock": false, 00:13:28.872 "num_base_bdevs": 4, 00:13:28.872 "num_base_bdevs_discovered": 1, 00:13:28.872 
"num_base_bdevs_operational": 4, 00:13:28.872 "base_bdevs_list": [ 00:13:28.872 { 00:13:28.872 "name": "BaseBdev1", 00:13:28.872 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:28.872 "is_configured": true, 00:13:28.872 "data_offset": 0, 00:13:28.872 "data_size": 65536 00:13:28.872 }, 00:13:28.872 { 00:13:28.872 "name": "BaseBdev2", 00:13:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.872 "is_configured": false, 00:13:28.872 "data_offset": 0, 00:13:28.872 "data_size": 0 00:13:28.872 }, 00:13:28.872 { 00:13:28.872 "name": "BaseBdev3", 00:13:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.872 "is_configured": false, 00:13:28.872 "data_offset": 0, 00:13:28.872 "data_size": 0 00:13:28.872 }, 00:13:28.872 { 00:13:28.872 "name": "BaseBdev4", 00:13:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.872 "is_configured": false, 00:13:28.872 "data_offset": 0, 00:13:28.872 "data_size": 0 00:13:28.872 } 00:13:28.872 ] 00:13:28.872 }' 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.872 16:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.130 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 [2024-11-05 16:27:42.249662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.390 BaseBdev2 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev2 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 [ 00:13:29.390 { 00:13:29.390 "name": "BaseBdev2", 00:13:29.390 "aliases": [ 00:13:29.390 "c3c67e97-ccd0-4a2b-85d2-3079518c2400" 00:13:29.390 ], 00:13:29.390 "product_name": "Malloc disk", 00:13:29.390 "block_size": 512, 00:13:29.390 "num_blocks": 65536, 00:13:29.390 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:29.390 "assigned_rate_limits": { 00:13:29.390 "rw_ios_per_sec": 0, 00:13:29.390 "rw_mbytes_per_sec": 0, 00:13:29.390 "r_mbytes_per_sec": 0, 00:13:29.390 "w_mbytes_per_sec": 0 00:13:29.390 }, 00:13:29.390 "claimed": true, 00:13:29.390 "claim_type": "exclusive_write", 00:13:29.390 "zoned": false, 00:13:29.390 "supported_io_types": { 00:13:29.390 "read": true, 00:13:29.390 "write": true, 00:13:29.390 
"unmap": true, 00:13:29.390 "flush": true, 00:13:29.390 "reset": true, 00:13:29.390 "nvme_admin": false, 00:13:29.390 "nvme_io": false, 00:13:29.390 "nvme_io_md": false, 00:13:29.390 "write_zeroes": true, 00:13:29.390 "zcopy": true, 00:13:29.390 "get_zone_info": false, 00:13:29.390 "zone_management": false, 00:13:29.390 "zone_append": false, 00:13:29.390 "compare": false, 00:13:29.390 "compare_and_write": false, 00:13:29.390 "abort": true, 00:13:29.390 "seek_hole": false, 00:13:29.390 "seek_data": false, 00:13:29.390 "copy": true, 00:13:29.390 "nvme_iov_md": false 00:13:29.390 }, 00:13:29.390 "memory_domains": [ 00:13:29.390 { 00:13:29.390 "dma_device_id": "system", 00:13:29.390 "dma_device_type": 1 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.390 "dma_device_type": 2 00:13:29.390 } 00:13:29.390 ], 00:13:29.390 "driver_specific": {} 00:13:29.390 } 00:13:29.390 ] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.390 16:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.390 "name": "Existed_Raid", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.390 "strip_size_kb": 0, 00:13:29.390 "state": "configuring", 00:13:29.390 "raid_level": "raid1", 00:13:29.390 "superblock": false, 00:13:29.390 "num_base_bdevs": 4, 00:13:29.390 "num_base_bdevs_discovered": 2, 00:13:29.390 "num_base_bdevs_operational": 4, 00:13:29.390 "base_bdevs_list": [ 00:13:29.390 { 00:13:29.390 "name": "BaseBdev1", 00:13:29.390 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:29.390 "is_configured": true, 00:13:29.390 "data_offset": 0, 00:13:29.390 "data_size": 65536 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "BaseBdev2", 00:13:29.390 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:29.390 "is_configured": true, 00:13:29.390 
"data_offset": 0, 00:13:29.390 "data_size": 65536 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "BaseBdev3", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.390 "is_configured": false, 00:13:29.390 "data_offset": 0, 00:13:29.390 "data_size": 0 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "BaseBdev4", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.390 "is_configured": false, 00:13:29.390 "data_offset": 0, 00:13:29.390 "data_size": 0 00:13:29.390 } 00:13:29.390 ] 00:13:29.390 }' 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.390 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.650 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:29.650 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.650 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.909 [2024-11-05 16:27:42.742106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.909 BaseBdev3 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.909 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.909 [ 00:13:29.909 { 00:13:29.910 "name": "BaseBdev3", 00:13:29.910 "aliases": [ 00:13:29.910 "04f90de4-e342-4f0a-b259-f91140e0d2a9" 00:13:29.910 ], 00:13:29.910 "product_name": "Malloc disk", 00:13:29.910 "block_size": 512, 00:13:29.910 "num_blocks": 65536, 00:13:29.910 "uuid": "04f90de4-e342-4f0a-b259-f91140e0d2a9", 00:13:29.910 "assigned_rate_limits": { 00:13:29.910 "rw_ios_per_sec": 0, 00:13:29.910 "rw_mbytes_per_sec": 0, 00:13:29.910 "r_mbytes_per_sec": 0, 00:13:29.910 "w_mbytes_per_sec": 0 00:13:29.910 }, 00:13:29.910 "claimed": true, 00:13:29.910 "claim_type": "exclusive_write", 00:13:29.910 "zoned": false, 00:13:29.910 "supported_io_types": { 00:13:29.910 "read": true, 00:13:29.910 "write": true, 00:13:29.910 "unmap": true, 00:13:29.910 "flush": true, 00:13:29.910 "reset": true, 00:13:29.910 "nvme_admin": false, 00:13:29.910 "nvme_io": false, 00:13:29.910 "nvme_io_md": false, 00:13:29.910 "write_zeroes": true, 00:13:29.910 "zcopy": true, 00:13:29.910 "get_zone_info": false, 00:13:29.910 "zone_management": false, 00:13:29.910 "zone_append": false, 00:13:29.910 "compare": false, 00:13:29.910 "compare_and_write": false, 00:13:29.910 "abort": true, 
00:13:29.910 "seek_hole": false, 00:13:29.910 "seek_data": false, 00:13:29.910 "copy": true, 00:13:29.910 "nvme_iov_md": false 00:13:29.910 }, 00:13:29.910 "memory_domains": [ 00:13:29.910 { 00:13:29.910 "dma_device_id": "system", 00:13:29.910 "dma_device_type": 1 00:13:29.910 }, 00:13:29.910 { 00:13:29.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.910 "dma_device_type": 2 00:13:29.910 } 00:13:29.910 ], 00:13:29.910 "driver_specific": {} 00:13:29.910 } 00:13:29.910 ] 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.910 16:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.910 "name": "Existed_Raid", 00:13:29.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.910 "strip_size_kb": 0, 00:13:29.910 "state": "configuring", 00:13:29.910 "raid_level": "raid1", 00:13:29.910 "superblock": false, 00:13:29.910 "num_base_bdevs": 4, 00:13:29.910 "num_base_bdevs_discovered": 3, 00:13:29.910 "num_base_bdevs_operational": 4, 00:13:29.910 "base_bdevs_list": [ 00:13:29.910 { 00:13:29.910 "name": "BaseBdev1", 00:13:29.910 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:29.910 "is_configured": true, 00:13:29.910 "data_offset": 0, 00:13:29.910 "data_size": 65536 00:13:29.910 }, 00:13:29.910 { 00:13:29.910 "name": "BaseBdev2", 00:13:29.910 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:29.910 "is_configured": true, 00:13:29.910 "data_offset": 0, 00:13:29.910 "data_size": 65536 00:13:29.910 }, 00:13:29.910 { 00:13:29.910 "name": "BaseBdev3", 00:13:29.910 "uuid": "04f90de4-e342-4f0a-b259-f91140e0d2a9", 00:13:29.910 "is_configured": true, 00:13:29.910 "data_offset": 0, 00:13:29.910 "data_size": 65536 00:13:29.910 }, 00:13:29.910 { 00:13:29.910 "name": "BaseBdev4", 00:13:29.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.910 "is_configured": false, 00:13:29.910 "data_offset": 
0, 00:13:29.910 "data_size": 0 00:13:29.910 } 00:13:29.910 ] 00:13:29.910 }' 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.910 16:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.168 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:30.168 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.168 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.427 [2024-11-05 16:27:43.291138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.427 [2024-11-05 16:27:43.291213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.427 [2024-11-05 16:27:43.291222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:30.427 [2024-11-05 16:27:43.291584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.427 [2024-11-05 16:27:43.291830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.427 [2024-11-05 16:27:43.291846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:30.427 [2024-11-05 16:27:43.292146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.427 BaseBdev4 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.427 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.427 [ 00:13:30.427 { 00:13:30.427 "name": "BaseBdev4", 00:13:30.427 "aliases": [ 00:13:30.427 "75b27748-9db8-4cb2-baf0-d338752059da" 00:13:30.427 ], 00:13:30.427 "product_name": "Malloc disk", 00:13:30.427 "block_size": 512, 00:13:30.427 "num_blocks": 65536, 00:13:30.427 "uuid": "75b27748-9db8-4cb2-baf0-d338752059da", 00:13:30.427 "assigned_rate_limits": { 00:13:30.427 "rw_ios_per_sec": 0, 00:13:30.427 "rw_mbytes_per_sec": 0, 00:13:30.427 "r_mbytes_per_sec": 0, 00:13:30.427 "w_mbytes_per_sec": 0 00:13:30.427 }, 00:13:30.427 "claimed": true, 00:13:30.427 "claim_type": "exclusive_write", 00:13:30.427 "zoned": false, 00:13:30.427 "supported_io_types": { 00:13:30.427 "read": true, 00:13:30.427 "write": true, 00:13:30.427 "unmap": true, 00:13:30.427 "flush": true, 00:13:30.427 "reset": true, 00:13:30.427 "nvme_admin": false, 00:13:30.427 "nvme_io": 
false, 00:13:30.427 "nvme_io_md": false, 00:13:30.427 "write_zeroes": true, 00:13:30.427 "zcopy": true, 00:13:30.427 "get_zone_info": false, 00:13:30.427 "zone_management": false, 00:13:30.427 "zone_append": false, 00:13:30.428 "compare": false, 00:13:30.428 "compare_and_write": false, 00:13:30.428 "abort": true, 00:13:30.428 "seek_hole": false, 00:13:30.428 "seek_data": false, 00:13:30.428 "copy": true, 00:13:30.428 "nvme_iov_md": false 00:13:30.428 }, 00:13:30.428 "memory_domains": [ 00:13:30.428 { 00:13:30.428 "dma_device_id": "system", 00:13:30.428 "dma_device_type": 1 00:13:30.428 }, 00:13:30.428 { 00:13:30.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.428 "dma_device_type": 2 00:13:30.428 } 00:13:30.428 ], 00:13:30.428 "driver_specific": {} 00:13:30.428 } 00:13:30.428 ] 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.428 "name": "Existed_Raid", 00:13:30.428 "uuid": "d492db78-21dc-40c0-ba9e-44153dfd2db3", 00:13:30.428 "strip_size_kb": 0, 00:13:30.428 "state": "online", 00:13:30.428 "raid_level": "raid1", 00:13:30.428 "superblock": false, 00:13:30.428 "num_base_bdevs": 4, 00:13:30.428 "num_base_bdevs_discovered": 4, 00:13:30.428 "num_base_bdevs_operational": 4, 00:13:30.428 "base_bdevs_list": [ 00:13:30.428 { 00:13:30.428 "name": "BaseBdev1", 00:13:30.428 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:30.428 "is_configured": true, 00:13:30.428 "data_offset": 0, 00:13:30.428 "data_size": 65536 00:13:30.428 }, 00:13:30.428 { 00:13:30.428 "name": "BaseBdev2", 00:13:30.428 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:30.428 "is_configured": true, 00:13:30.428 "data_offset": 0, 00:13:30.428 "data_size": 65536 00:13:30.428 }, 00:13:30.428 { 00:13:30.428 "name": "BaseBdev3", 00:13:30.428 "uuid": "04f90de4-e342-4f0a-b259-f91140e0d2a9", 
00:13:30.428 "is_configured": true, 00:13:30.428 "data_offset": 0, 00:13:30.428 "data_size": 65536 00:13:30.428 }, 00:13:30.428 { 00:13:30.428 "name": "BaseBdev4", 00:13:30.428 "uuid": "75b27748-9db8-4cb2-baf0-d338752059da", 00:13:30.428 "is_configured": true, 00:13:30.428 "data_offset": 0, 00:13:30.428 "data_size": 65536 00:13:30.428 } 00:13:30.428 ] 00:13:30.428 }' 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.428 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.686 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.944 [2024-11-05 16:27:43.778855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.944 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.944 16:27:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.944 "name": "Existed_Raid", 00:13:30.944 "aliases": [ 00:13:30.944 "d492db78-21dc-40c0-ba9e-44153dfd2db3" 00:13:30.944 ], 00:13:30.944 "product_name": "Raid Volume", 00:13:30.945 "block_size": 512, 00:13:30.945 "num_blocks": 65536, 00:13:30.945 "uuid": "d492db78-21dc-40c0-ba9e-44153dfd2db3", 00:13:30.945 "assigned_rate_limits": { 00:13:30.945 "rw_ios_per_sec": 0, 00:13:30.945 "rw_mbytes_per_sec": 0, 00:13:30.945 "r_mbytes_per_sec": 0, 00:13:30.945 "w_mbytes_per_sec": 0 00:13:30.945 }, 00:13:30.945 "claimed": false, 00:13:30.945 "zoned": false, 00:13:30.945 "supported_io_types": { 00:13:30.945 "read": true, 00:13:30.945 "write": true, 00:13:30.945 "unmap": false, 00:13:30.945 "flush": false, 00:13:30.945 "reset": true, 00:13:30.945 "nvme_admin": false, 00:13:30.945 "nvme_io": false, 00:13:30.945 "nvme_io_md": false, 00:13:30.945 "write_zeroes": true, 00:13:30.945 "zcopy": false, 00:13:30.945 "get_zone_info": false, 00:13:30.945 "zone_management": false, 00:13:30.945 "zone_append": false, 00:13:30.945 "compare": false, 00:13:30.945 "compare_and_write": false, 00:13:30.945 "abort": false, 00:13:30.945 "seek_hole": false, 00:13:30.945 "seek_data": false, 00:13:30.945 "copy": false, 00:13:30.945 "nvme_iov_md": false 00:13:30.945 }, 00:13:30.945 "memory_domains": [ 00:13:30.945 { 00:13:30.945 "dma_device_id": "system", 00:13:30.945 "dma_device_type": 1 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.945 "dma_device_type": 2 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "system", 00:13:30.945 "dma_device_type": 1 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.945 "dma_device_type": 2 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "system", 00:13:30.945 "dma_device_type": 1 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.945 "dma_device_type": 2 
00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "system", 00:13:30.945 "dma_device_type": 1 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.945 "dma_device_type": 2 00:13:30.945 } 00:13:30.945 ], 00:13:30.945 "driver_specific": { 00:13:30.945 "raid": { 00:13:30.945 "uuid": "d492db78-21dc-40c0-ba9e-44153dfd2db3", 00:13:30.945 "strip_size_kb": 0, 00:13:30.945 "state": "online", 00:13:30.945 "raid_level": "raid1", 00:13:30.945 "superblock": false, 00:13:30.945 "num_base_bdevs": 4, 00:13:30.945 "num_base_bdevs_discovered": 4, 00:13:30.945 "num_base_bdevs_operational": 4, 00:13:30.945 "base_bdevs_list": [ 00:13:30.945 { 00:13:30.945 "name": "BaseBdev1", 00:13:30.945 "uuid": "c3b69ea4-89bd-4977-b539-35659b9d826d", 00:13:30.945 "is_configured": true, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 65536 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "name": "BaseBdev2", 00:13:30.945 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:30.945 "is_configured": true, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 65536 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "name": "BaseBdev3", 00:13:30.945 "uuid": "04f90de4-e342-4f0a-b259-f91140e0d2a9", 00:13:30.945 "is_configured": true, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 65536 00:13:30.945 }, 00:13:30.945 { 00:13:30.945 "name": "BaseBdev4", 00:13:30.945 "uuid": "75b27748-9db8-4cb2-baf0-d338752059da", 00:13:30.945 "is_configured": true, 00:13:30.945 "data_offset": 0, 00:13:30.945 "data_size": 65536 00:13:30.945 } 00:13:30.945 ] 00:13:30.945 } 00:13:30.945 } 00:13:30.945 }' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:30.945 BaseBdev2 00:13:30.945 BaseBdev3 00:13:30.945 BaseBdev4' 00:13:30.945 
16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.945 16:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.945 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 [2024-11-05 16:27:44.097989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.205 "name": "Existed_Raid", 00:13:31.205 "uuid": "d492db78-21dc-40c0-ba9e-44153dfd2db3", 00:13:31.205 "strip_size_kb": 0, 00:13:31.205 "state": "online", 00:13:31.205 "raid_level": "raid1", 00:13:31.205 "superblock": false, 00:13:31.205 "num_base_bdevs": 4, 00:13:31.205 "num_base_bdevs_discovered": 3, 00:13:31.205 "num_base_bdevs_operational": 3, 00:13:31.205 "base_bdevs_list": [ 00:13:31.205 { 00:13:31.205 "name": null, 00:13:31.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.205 "is_configured": false, 00:13:31.205 "data_offset": 0, 00:13:31.205 "data_size": 65536 00:13:31.205 }, 00:13:31.205 { 00:13:31.205 "name": "BaseBdev2", 00:13:31.205 "uuid": "c3c67e97-ccd0-4a2b-85d2-3079518c2400", 00:13:31.205 "is_configured": true, 00:13:31.205 "data_offset": 0, 00:13:31.205 "data_size": 65536 00:13:31.205 }, 00:13:31.205 { 00:13:31.205 "name": "BaseBdev3", 00:13:31.205 "uuid": "04f90de4-e342-4f0a-b259-f91140e0d2a9", 00:13:31.205 "is_configured": true, 00:13:31.205 "data_offset": 0, 00:13:31.205 "data_size": 65536 00:13:31.205 }, 00:13:31.205 { 
00:13:31.205 "name": "BaseBdev4", 00:13:31.205 "uuid": "75b27748-9db8-4cb2-baf0-d338752059da", 00:13:31.205 "is_configured": true, 00:13:31.205 "data_offset": 0, 00:13:31.205 "data_size": 65536 00:13:31.205 } 00:13:31.205 ] 00:13:31.205 }' 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.205 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.773 [2024-11-05 16:27:44.719544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.773 
16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.773 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.033 16:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.033 [2024-11-05 16:27:44.899140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:32.033 16:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.033 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.033 [2024-11-05 16:27:45.082218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:32.033 [2024-11-05 16:27:45.082458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.292 [2024-11-05 16:27:45.203633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.292 [2024-11-05 16:27:45.203715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.292 [2024-11-05 16:27:45.203731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.292 16:27:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 BaseBdev2 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:32.292 16:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.292 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.292 [ 00:13:32.292 { 00:13:32.292 "name": "BaseBdev2", 00:13:32.292 "aliases": [ 00:13:32.292 "d0422434-0d27-4264-822e-bdace5ddfc29" 00:13:32.292 ], 00:13:32.292 "product_name": "Malloc disk", 00:13:32.292 "block_size": 512, 00:13:32.292 "num_blocks": 65536, 00:13:32.292 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:32.293 "assigned_rate_limits": { 00:13:32.293 "rw_ios_per_sec": 0, 00:13:32.293 "rw_mbytes_per_sec": 0, 00:13:32.293 "r_mbytes_per_sec": 0, 00:13:32.293 "w_mbytes_per_sec": 0 00:13:32.293 }, 00:13:32.293 "claimed": false, 00:13:32.293 "zoned": false, 00:13:32.293 "supported_io_types": { 00:13:32.293 "read": true, 00:13:32.293 "write": true, 00:13:32.293 "unmap": true, 00:13:32.293 "flush": true, 00:13:32.293 "reset": true, 00:13:32.293 "nvme_admin": false, 00:13:32.293 "nvme_io": false, 00:13:32.293 "nvme_io_md": false, 00:13:32.293 "write_zeroes": true, 00:13:32.293 "zcopy": true, 00:13:32.293 "get_zone_info": false, 00:13:32.293 "zone_management": false, 00:13:32.293 "zone_append": false, 00:13:32.293 "compare": false, 00:13:32.293 "compare_and_write": false, 
00:13:32.293 "abort": true, 00:13:32.293 "seek_hole": false, 00:13:32.293 "seek_data": false, 00:13:32.293 "copy": true, 00:13:32.293 "nvme_iov_md": false 00:13:32.293 }, 00:13:32.293 "memory_domains": [ 00:13:32.293 { 00:13:32.293 "dma_device_id": "system", 00:13:32.293 "dma_device_type": 1 00:13:32.293 }, 00:13:32.293 { 00:13:32.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.293 "dma_device_type": 2 00:13:32.293 } 00:13:32.293 ], 00:13:32.293 "driver_specific": {} 00:13:32.293 } 00:13:32.293 ] 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.293 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.553 BaseBdev3 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:32.553 16:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.553 [ 00:13:32.553 { 00:13:32.553 "name": "BaseBdev3", 00:13:32.553 "aliases": [ 00:13:32.553 "72fc1e8a-fe36-4482-98a2-719cf2349108" 00:13:32.553 ], 00:13:32.553 "product_name": "Malloc disk", 00:13:32.553 "block_size": 512, 00:13:32.553 "num_blocks": 65536, 00:13:32.553 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:32.553 "assigned_rate_limits": { 00:13:32.553 "rw_ios_per_sec": 0, 00:13:32.553 "rw_mbytes_per_sec": 0, 00:13:32.553 "r_mbytes_per_sec": 0, 00:13:32.553 "w_mbytes_per_sec": 0 00:13:32.553 }, 00:13:32.553 "claimed": false, 00:13:32.553 "zoned": false, 00:13:32.553 "supported_io_types": { 00:13:32.553 "read": true, 00:13:32.553 "write": true, 00:13:32.553 "unmap": true, 00:13:32.553 "flush": true, 00:13:32.553 "reset": true, 00:13:32.553 "nvme_admin": false, 00:13:32.553 "nvme_io": false, 00:13:32.553 "nvme_io_md": false, 00:13:32.553 "write_zeroes": true, 00:13:32.553 "zcopy": true, 00:13:32.553 "get_zone_info": false, 00:13:32.553 "zone_management": false, 00:13:32.553 "zone_append": false, 00:13:32.553 "compare": false, 00:13:32.553 "compare_and_write": false, 
00:13:32.553 "abort": true, 00:13:32.553 "seek_hole": false, 00:13:32.553 "seek_data": false, 00:13:32.553 "copy": true, 00:13:32.553 "nvme_iov_md": false 00:13:32.553 }, 00:13:32.553 "memory_domains": [ 00:13:32.553 { 00:13:32.553 "dma_device_id": "system", 00:13:32.553 "dma_device_type": 1 00:13:32.553 }, 00:13:32.553 { 00:13:32.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.553 "dma_device_type": 2 00:13:32.553 } 00:13:32.553 ], 00:13:32.553 "driver_specific": {} 00:13:32.553 } 00:13:32.553 ] 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.553 BaseBdev4 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:32.553 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:32.554 16:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 [ 00:13:32.554 { 00:13:32.554 "name": "BaseBdev4", 00:13:32.554 "aliases": [ 00:13:32.554 "0f92552b-6009-4e95-bf12-d7808f3226fd" 00:13:32.554 ], 00:13:32.554 "product_name": "Malloc disk", 00:13:32.554 "block_size": 512, 00:13:32.554 "num_blocks": 65536, 00:13:32.554 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:32.554 "assigned_rate_limits": { 00:13:32.554 "rw_ios_per_sec": 0, 00:13:32.554 "rw_mbytes_per_sec": 0, 00:13:32.554 "r_mbytes_per_sec": 0, 00:13:32.554 "w_mbytes_per_sec": 0 00:13:32.554 }, 00:13:32.554 "claimed": false, 00:13:32.554 "zoned": false, 00:13:32.554 "supported_io_types": { 00:13:32.554 "read": true, 00:13:32.554 "write": true, 00:13:32.554 "unmap": true, 00:13:32.554 "flush": true, 00:13:32.554 "reset": true, 00:13:32.554 "nvme_admin": false, 00:13:32.554 "nvme_io": false, 00:13:32.554 "nvme_io_md": false, 00:13:32.554 "write_zeroes": true, 00:13:32.554 "zcopy": true, 00:13:32.554 "get_zone_info": false, 00:13:32.554 "zone_management": false, 00:13:32.554 "zone_append": false, 00:13:32.554 "compare": false, 00:13:32.554 "compare_and_write": false, 
00:13:32.554 "abort": true, 00:13:32.554 "seek_hole": false, 00:13:32.554 "seek_data": false, 00:13:32.554 "copy": true, 00:13:32.554 "nvme_iov_md": false 00:13:32.554 }, 00:13:32.554 "memory_domains": [ 00:13:32.554 { 00:13:32.554 "dma_device_id": "system", 00:13:32.554 "dma_device_type": 1 00:13:32.554 }, 00:13:32.554 { 00:13:32.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.554 "dma_device_type": 2 00:13:32.554 } 00:13:32.554 ], 00:13:32.554 "driver_specific": {} 00:13:32.554 } 00:13:32.554 ] 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 [2024-11-05 16:27:45.543682] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:32.554 [2024-11-05 16:27:45.543846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:32.554 [2024-11-05 16:27:45.543902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.554 [2024-11-05 16:27:45.546499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.554 [2024-11-05 16:27:45.546648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:32.554 16:27:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.554 "name": "Existed_Raid", 00:13:32.554 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:32.554 "strip_size_kb": 0, 00:13:32.554 "state": "configuring", 00:13:32.554 "raid_level": "raid1", 00:13:32.554 "superblock": false, 00:13:32.554 "num_base_bdevs": 4, 00:13:32.554 "num_base_bdevs_discovered": 3, 00:13:32.554 "num_base_bdevs_operational": 4, 00:13:32.554 "base_bdevs_list": [ 00:13:32.554 { 00:13:32.554 "name": "BaseBdev1", 00:13:32.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.554 "is_configured": false, 00:13:32.554 "data_offset": 0, 00:13:32.554 "data_size": 0 00:13:32.554 }, 00:13:32.554 { 00:13:32.554 "name": "BaseBdev2", 00:13:32.554 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:32.554 "is_configured": true, 00:13:32.554 "data_offset": 0, 00:13:32.554 "data_size": 65536 00:13:32.554 }, 00:13:32.554 { 00:13:32.554 "name": "BaseBdev3", 00:13:32.554 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:32.554 "is_configured": true, 00:13:32.554 "data_offset": 0, 00:13:32.554 "data_size": 65536 00:13:32.554 }, 00:13:32.554 { 00:13:32.554 "name": "BaseBdev4", 00:13:32.554 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:32.554 "is_configured": true, 00:13:32.554 "data_offset": 0, 00:13:32.554 "data_size": 65536 00:13:32.554 } 00:13:32.554 ] 00:13:32.554 }' 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.554 16:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.124 [2024-11-05 16:27:46.050854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.124 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.124 "name": "Existed_Raid", 00:13:33.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.124 
"strip_size_kb": 0, 00:13:33.124 "state": "configuring", 00:13:33.124 "raid_level": "raid1", 00:13:33.124 "superblock": false, 00:13:33.124 "num_base_bdevs": 4, 00:13:33.124 "num_base_bdevs_discovered": 2, 00:13:33.124 "num_base_bdevs_operational": 4, 00:13:33.124 "base_bdevs_list": [ 00:13:33.124 { 00:13:33.124 "name": "BaseBdev1", 00:13:33.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.125 "is_configured": false, 00:13:33.125 "data_offset": 0, 00:13:33.125 "data_size": 0 00:13:33.125 }, 00:13:33.125 { 00:13:33.125 "name": null, 00:13:33.125 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:33.125 "is_configured": false, 00:13:33.125 "data_offset": 0, 00:13:33.125 "data_size": 65536 00:13:33.125 }, 00:13:33.125 { 00:13:33.125 "name": "BaseBdev3", 00:13:33.125 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:33.125 "is_configured": true, 00:13:33.125 "data_offset": 0, 00:13:33.125 "data_size": 65536 00:13:33.125 }, 00:13:33.125 { 00:13:33.125 "name": "BaseBdev4", 00:13:33.125 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:33.125 "is_configured": true, 00:13:33.125 "data_offset": 0, 00:13:33.125 "data_size": 65536 00:13:33.125 } 00:13:33.125 ] 00:13:33.125 }' 00:13:33.125 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.125 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.695 16:27:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.695 [2024-11-05 16:27:46.575672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.695 BaseBdev1 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.695 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.696 [ 00:13:33.696 { 00:13:33.696 "name": "BaseBdev1", 00:13:33.696 "aliases": [ 00:13:33.696 "21743748-ba5d-4f76-9adc-758784fb391e" 00:13:33.696 ], 00:13:33.696 "product_name": "Malloc disk", 00:13:33.696 "block_size": 512, 00:13:33.696 "num_blocks": 65536, 00:13:33.696 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:33.696 "assigned_rate_limits": { 00:13:33.696 "rw_ios_per_sec": 0, 00:13:33.696 "rw_mbytes_per_sec": 0, 00:13:33.696 "r_mbytes_per_sec": 0, 00:13:33.696 "w_mbytes_per_sec": 0 00:13:33.696 }, 00:13:33.696 "claimed": true, 00:13:33.696 "claim_type": "exclusive_write", 00:13:33.696 "zoned": false, 00:13:33.696 "supported_io_types": { 00:13:33.696 "read": true, 00:13:33.696 "write": true, 00:13:33.696 "unmap": true, 00:13:33.696 "flush": true, 00:13:33.696 "reset": true, 00:13:33.696 "nvme_admin": false, 00:13:33.696 "nvme_io": false, 00:13:33.696 "nvme_io_md": false, 00:13:33.696 "write_zeroes": true, 00:13:33.696 "zcopy": true, 00:13:33.696 "get_zone_info": false, 00:13:33.696 "zone_management": false, 00:13:33.696 "zone_append": false, 00:13:33.696 "compare": false, 00:13:33.696 "compare_and_write": false, 00:13:33.696 "abort": true, 00:13:33.696 "seek_hole": false, 00:13:33.696 "seek_data": false, 00:13:33.696 "copy": true, 00:13:33.696 "nvme_iov_md": false 00:13:33.696 }, 00:13:33.696 "memory_domains": [ 00:13:33.696 { 00:13:33.696 "dma_device_id": "system", 00:13:33.696 "dma_device_type": 1 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.696 "dma_device_type": 2 00:13:33.696 } 00:13:33.696 ], 00:13:33.696 "driver_specific": {} 00:13:33.696 } 00:13:33.696 ] 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@909 -- # return 0 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.696 "name": "Existed_Raid", 00:13:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.696 
"strip_size_kb": 0, 00:13:33.696 "state": "configuring", 00:13:33.696 "raid_level": "raid1", 00:13:33.696 "superblock": false, 00:13:33.696 "num_base_bdevs": 4, 00:13:33.696 "num_base_bdevs_discovered": 3, 00:13:33.696 "num_base_bdevs_operational": 4, 00:13:33.696 "base_bdevs_list": [ 00:13:33.696 { 00:13:33.696 "name": "BaseBdev1", 00:13:33.696 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:33.696 "is_configured": true, 00:13:33.696 "data_offset": 0, 00:13:33.696 "data_size": 65536 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": null, 00:13:33.696 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:33.696 "is_configured": false, 00:13:33.696 "data_offset": 0, 00:13:33.696 "data_size": 65536 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": "BaseBdev3", 00:13:33.696 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:33.696 "is_configured": true, 00:13:33.696 "data_offset": 0, 00:13:33.696 "data_size": 65536 00:13:33.696 }, 00:13:33.696 { 00:13:33.696 "name": "BaseBdev4", 00:13:33.696 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:33.696 "is_configured": true, 00:13:33.696 "data_offset": 0, 00:13:33.696 "data_size": 65536 00:13:33.696 } 00:13:33.696 ] 00:13:33.696 }' 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.696 16:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.265 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.265 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.265 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.265 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.266 
16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.266 [2024-11-05 16:27:47.126824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.266 16:27:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.266 "name": "Existed_Raid", 00:13:34.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.266 "strip_size_kb": 0, 00:13:34.266 "state": "configuring", 00:13:34.266 "raid_level": "raid1", 00:13:34.266 "superblock": false, 00:13:34.266 "num_base_bdevs": 4, 00:13:34.266 "num_base_bdevs_discovered": 2, 00:13:34.266 "num_base_bdevs_operational": 4, 00:13:34.266 "base_bdevs_list": [ 00:13:34.266 { 00:13:34.266 "name": "BaseBdev1", 00:13:34.266 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:34.266 "is_configured": true, 00:13:34.266 "data_offset": 0, 00:13:34.266 "data_size": 65536 00:13:34.266 }, 00:13:34.266 { 00:13:34.266 "name": null, 00:13:34.266 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:34.266 "is_configured": false, 00:13:34.266 "data_offset": 0, 00:13:34.266 "data_size": 65536 00:13:34.266 }, 00:13:34.266 { 00:13:34.266 "name": null, 00:13:34.266 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:34.266 "is_configured": false, 00:13:34.266 "data_offset": 0, 00:13:34.266 "data_size": 65536 00:13:34.266 }, 00:13:34.266 { 00:13:34.266 "name": "BaseBdev4", 00:13:34.266 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:34.266 "is_configured": true, 00:13:34.266 "data_offset": 0, 00:13:34.266 "data_size": 65536 00:13:34.266 } 00:13:34.266 ] 00:13:34.266 }' 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.266 16:27:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.525 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.525 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.525 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.525 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.525 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.784 [2024-11-05 16:27:47.645975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.784 "name": "Existed_Raid", 00:13:34.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.784 "strip_size_kb": 0, 00:13:34.784 "state": "configuring", 00:13:34.784 "raid_level": "raid1", 00:13:34.784 "superblock": false, 00:13:34.784 "num_base_bdevs": 4, 00:13:34.784 "num_base_bdevs_discovered": 3, 00:13:34.784 "num_base_bdevs_operational": 4, 00:13:34.784 "base_bdevs_list": [ 00:13:34.784 { 00:13:34.784 "name": "BaseBdev1", 00:13:34.784 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:34.784 "is_configured": true, 00:13:34.784 "data_offset": 0, 00:13:34.784 "data_size": 65536 00:13:34.784 }, 00:13:34.784 { 00:13:34.784 "name": null, 00:13:34.784 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:34.784 "is_configured": false, 00:13:34.784 "data_offset": 0, 00:13:34.784 "data_size": 65536 00:13:34.784 }, 00:13:34.784 { 
00:13:34.784 "name": "BaseBdev3", 00:13:34.784 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:34.784 "is_configured": true, 00:13:34.784 "data_offset": 0, 00:13:34.784 "data_size": 65536 00:13:34.784 }, 00:13:34.784 { 00:13:34.784 "name": "BaseBdev4", 00:13:34.784 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:34.784 "is_configured": true, 00:13:34.784 "data_offset": 0, 00:13:34.784 "data_size": 65536 00:13:34.784 } 00:13:34.784 ] 00:13:34.784 }' 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.784 16:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.044 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.044 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:35.044 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.044 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.303 [2024-11-05 16:27:48.173126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.303 "name": "Existed_Raid", 00:13:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.303 "strip_size_kb": 0, 00:13:35.303 "state": "configuring", 00:13:35.303 "raid_level": "raid1", 00:13:35.303 "superblock": false, 00:13:35.303 
"num_base_bdevs": 4, 00:13:35.303 "num_base_bdevs_discovered": 2, 00:13:35.303 "num_base_bdevs_operational": 4, 00:13:35.303 "base_bdevs_list": [ 00:13:35.303 { 00:13:35.303 "name": null, 00:13:35.303 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:35.303 "is_configured": false, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 65536 00:13:35.303 }, 00:13:35.303 { 00:13:35.303 "name": null, 00:13:35.303 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:35.303 "is_configured": false, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 65536 00:13:35.303 }, 00:13:35.303 { 00:13:35.303 "name": "BaseBdev3", 00:13:35.303 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:35.303 "is_configured": true, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 65536 00:13:35.303 }, 00:13:35.303 { 00:13:35.303 "name": "BaseBdev4", 00:13:35.303 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:35.303 "is_configured": true, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 65536 00:13:35.303 } 00:13:35.303 ] 00:13:35.303 }' 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.303 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:35.871 16:27:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.871 [2024-11-05 16:27:48.810740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.871 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.872 16:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.872 "name": "Existed_Raid", 00:13:35.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.872 "strip_size_kb": 0, 00:13:35.872 "state": "configuring", 00:13:35.872 "raid_level": "raid1", 00:13:35.872 "superblock": false, 00:13:35.872 "num_base_bdevs": 4, 00:13:35.872 "num_base_bdevs_discovered": 3, 00:13:35.872 "num_base_bdevs_operational": 4, 00:13:35.872 "base_bdevs_list": [ 00:13:35.872 { 00:13:35.872 "name": null, 00:13:35.872 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:35.872 "is_configured": false, 00:13:35.872 "data_offset": 0, 00:13:35.872 "data_size": 65536 00:13:35.872 }, 00:13:35.872 { 00:13:35.872 "name": "BaseBdev2", 00:13:35.872 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:35.872 "is_configured": true, 00:13:35.872 "data_offset": 0, 00:13:35.872 "data_size": 65536 00:13:35.872 }, 00:13:35.872 { 00:13:35.872 "name": "BaseBdev3", 00:13:35.872 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:35.872 "is_configured": true, 00:13:35.872 "data_offset": 0, 00:13:35.872 "data_size": 65536 00:13:35.872 }, 00:13:35.872 { 00:13:35.872 "name": "BaseBdev4", 00:13:35.872 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:35.872 "is_configured": true, 00:13:35.872 "data_offset": 0, 00:13:35.872 "data_size": 65536 00:13:35.872 } 00:13:35.872 ] 00:13:35.872 }' 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.872 16:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21743748-ba5d-4f76-9adc-758784fb391e 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 [2024-11-05 16:27:49.418604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:36.448 [2024-11-05 16:27:49.418774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:36.448 [2024-11-05 16:27:49.418807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:36.448 [2024-11-05 16:27:49.419210] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:36.448 [2024-11-05 16:27:49.419468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:36.448 [2024-11-05 16:27:49.419515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:36.448 [2024-11-05 16:27:49.419879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.448 NewBaseBdev 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 [ 00:13:36.448 { 00:13:36.448 "name": "NewBaseBdev", 00:13:36.448 "aliases": [ 00:13:36.448 "21743748-ba5d-4f76-9adc-758784fb391e" 00:13:36.448 ], 00:13:36.448 "product_name": "Malloc disk", 00:13:36.448 "block_size": 512, 00:13:36.448 "num_blocks": 65536, 00:13:36.448 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:36.448 "assigned_rate_limits": { 00:13:36.448 "rw_ios_per_sec": 0, 00:13:36.448 "rw_mbytes_per_sec": 0, 00:13:36.448 "r_mbytes_per_sec": 0, 00:13:36.448 "w_mbytes_per_sec": 0 00:13:36.448 }, 00:13:36.448 "claimed": true, 00:13:36.448 "claim_type": "exclusive_write", 00:13:36.448 "zoned": false, 00:13:36.448 "supported_io_types": { 00:13:36.448 "read": true, 00:13:36.448 "write": true, 00:13:36.448 "unmap": true, 00:13:36.448 "flush": true, 00:13:36.448 "reset": true, 00:13:36.448 "nvme_admin": false, 00:13:36.448 "nvme_io": false, 00:13:36.448 "nvme_io_md": false, 00:13:36.448 "write_zeroes": true, 00:13:36.448 "zcopy": true, 00:13:36.448 "get_zone_info": false, 00:13:36.448 "zone_management": false, 00:13:36.448 "zone_append": false, 00:13:36.448 "compare": false, 00:13:36.448 "compare_and_write": false, 00:13:36.448 "abort": true, 00:13:36.448 "seek_hole": false, 00:13:36.448 "seek_data": false, 00:13:36.448 "copy": true, 00:13:36.448 "nvme_iov_md": false 00:13:36.448 }, 00:13:36.448 "memory_domains": [ 00:13:36.448 { 00:13:36.448 "dma_device_id": "system", 00:13:36.448 "dma_device_type": 1 00:13:36.448 }, 00:13:36.448 { 00:13:36.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.448 "dma_device_type": 2 00:13:36.448 } 00:13:36.448 ], 00:13:36.448 "driver_specific": {} 00:13:36.448 } 00:13:36.448 ] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:36.448 16:27:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.448 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.448 "name": "Existed_Raid", 00:13:36.448 "uuid": "c78f519f-4d43-4333-8c91-156885036a21", 00:13:36.448 "strip_size_kb": 0, 00:13:36.449 "state": "online", 00:13:36.449 "raid_level": "raid1", 
00:13:36.449 "superblock": false, 00:13:36.449 "num_base_bdevs": 4, 00:13:36.449 "num_base_bdevs_discovered": 4, 00:13:36.449 "num_base_bdevs_operational": 4, 00:13:36.449 "base_bdevs_list": [ 00:13:36.449 { 00:13:36.449 "name": "NewBaseBdev", 00:13:36.449 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:36.449 "is_configured": true, 00:13:36.449 "data_offset": 0, 00:13:36.449 "data_size": 65536 00:13:36.449 }, 00:13:36.449 { 00:13:36.449 "name": "BaseBdev2", 00:13:36.449 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:36.449 "is_configured": true, 00:13:36.449 "data_offset": 0, 00:13:36.449 "data_size": 65536 00:13:36.449 }, 00:13:36.449 { 00:13:36.449 "name": "BaseBdev3", 00:13:36.449 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:36.449 "is_configured": true, 00:13:36.449 "data_offset": 0, 00:13:36.449 "data_size": 65536 00:13:36.449 }, 00:13:36.449 { 00:13:36.449 "name": "BaseBdev4", 00:13:36.449 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:36.449 "is_configured": true, 00:13:36.449 "data_offset": 0, 00:13:36.449 "data_size": 65536 00:13:36.449 } 00:13:36.449 ] 00:13:36.449 }' 00:13:36.449 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.449 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.019 [2024-11-05 16:27:49.950179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.019 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.019 "name": "Existed_Raid", 00:13:37.019 "aliases": [ 00:13:37.019 "c78f519f-4d43-4333-8c91-156885036a21" 00:13:37.019 ], 00:13:37.019 "product_name": "Raid Volume", 00:13:37.019 "block_size": 512, 00:13:37.019 "num_blocks": 65536, 00:13:37.019 "uuid": "c78f519f-4d43-4333-8c91-156885036a21", 00:13:37.019 "assigned_rate_limits": { 00:13:37.019 "rw_ios_per_sec": 0, 00:13:37.020 "rw_mbytes_per_sec": 0, 00:13:37.020 "r_mbytes_per_sec": 0, 00:13:37.020 "w_mbytes_per_sec": 0 00:13:37.020 }, 00:13:37.020 "claimed": false, 00:13:37.020 "zoned": false, 00:13:37.020 "supported_io_types": { 00:13:37.020 "read": true, 00:13:37.020 "write": true, 00:13:37.020 "unmap": false, 00:13:37.020 "flush": false, 00:13:37.020 "reset": true, 00:13:37.020 "nvme_admin": false, 00:13:37.020 "nvme_io": false, 00:13:37.020 "nvme_io_md": false, 00:13:37.020 "write_zeroes": true, 00:13:37.020 "zcopy": false, 00:13:37.020 "get_zone_info": false, 00:13:37.020 "zone_management": false, 00:13:37.020 "zone_append": false, 00:13:37.020 "compare": false, 00:13:37.020 "compare_and_write": false, 00:13:37.020 "abort": false, 00:13:37.020 "seek_hole": false, 00:13:37.020 "seek_data": false, 00:13:37.020 "copy": false, 00:13:37.020 
"nvme_iov_md": false 00:13:37.020 }, 00:13:37.020 "memory_domains": [ 00:13:37.020 { 00:13:37.020 "dma_device_id": "system", 00:13:37.020 "dma_device_type": 1 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.020 "dma_device_type": 2 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "system", 00:13:37.020 "dma_device_type": 1 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.020 "dma_device_type": 2 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "system", 00:13:37.020 "dma_device_type": 1 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.020 "dma_device_type": 2 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "system", 00:13:37.020 "dma_device_type": 1 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.020 "dma_device_type": 2 00:13:37.020 } 00:13:37.020 ], 00:13:37.020 "driver_specific": { 00:13:37.020 "raid": { 00:13:37.020 "uuid": "c78f519f-4d43-4333-8c91-156885036a21", 00:13:37.020 "strip_size_kb": 0, 00:13:37.020 "state": "online", 00:13:37.020 "raid_level": "raid1", 00:13:37.020 "superblock": false, 00:13:37.020 "num_base_bdevs": 4, 00:13:37.020 "num_base_bdevs_discovered": 4, 00:13:37.020 "num_base_bdevs_operational": 4, 00:13:37.020 "base_bdevs_list": [ 00:13:37.020 { 00:13:37.020 "name": "NewBaseBdev", 00:13:37.020 "uuid": "21743748-ba5d-4f76-9adc-758784fb391e", 00:13:37.020 "is_configured": true, 00:13:37.020 "data_offset": 0, 00:13:37.020 "data_size": 65536 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "name": "BaseBdev2", 00:13:37.020 "uuid": "d0422434-0d27-4264-822e-bdace5ddfc29", 00:13:37.020 "is_configured": true, 00:13:37.020 "data_offset": 0, 00:13:37.020 "data_size": 65536 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "name": "BaseBdev3", 00:13:37.020 "uuid": "72fc1e8a-fe36-4482-98a2-719cf2349108", 00:13:37.020 "is_configured": true, 
00:13:37.020 "data_offset": 0, 00:13:37.020 "data_size": 65536 00:13:37.020 }, 00:13:37.020 { 00:13:37.020 "name": "BaseBdev4", 00:13:37.020 "uuid": "0f92552b-6009-4e95-bf12-d7808f3226fd", 00:13:37.020 "is_configured": true, 00:13:37.020 "data_offset": 0, 00:13:37.020 "data_size": 65536 00:13:37.020 } 00:13:37.020 ] 00:13:37.020 } 00:13:37.020 } 00:13:37.020 }' 00:13:37.020 16:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:37.020 BaseBdev2 00:13:37.020 BaseBdev3 00:13:37.020 BaseBdev4' 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.020 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.279 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.279 [2024-11-05 16:27:50.289228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.279 [2024-11-05 16:27:50.289347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.279 [2024-11-05 16:27:50.289469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.279 [2024-11-05 16:27:50.289822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.279 [2024-11-05 16:27:50.289841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73507 
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73507 ']'
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73507
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73507
00:13:37.280 killing process with pid 73507
16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73507'
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73507
00:13:37.280 [2024-11-05 16:27:50.338356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:37.280 16:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73507
00:13:37.846 [2024-11-05 16:27:50.751952] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:39.220 16:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:13:39.220
00:13:39.220 real 0m12.213s
00:13:39.220 user 0m19.081s
00:13:39.220 sys 0m2.332s
00:13:39.220 16:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:39.220 16:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.220 ************************************
00:13:39.220 END TEST raid_state_function_test
00:13:39.220 ************************************
00:13:39.220 16:27:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:13:39.220 16:27:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:13:39.220 16:27:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:39.220 16:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:39.220 ************************************
00:13:39.220 START TEST raid_state_function_test_sb
00:13:39.220 ************************************
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:13:39.220 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74184
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74184'
00:13:39.221 Process raid pid: 74184
16:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74184
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74184 ']'
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:39.221 16:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:39.221 [2024-11-05 16:27:52.130603] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:13:39.221 [2024-11-05 16:27:52.131261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:39.478 [2024-11-05 16:27:52.310695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:39.478 [2024-11-05 16:27:52.438364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.735 [2024-11-05 16:27:52.668947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:39.735 [2024-11-05 16:27:52.669083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:39.992 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:39.992 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:13:39.992 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:39.992 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:39.992 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.250 [2024-11-05 16:27:53.085744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:40.250 [2024-11-05 16:27:53.085876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:40.250 [2024-11-05 16:27:53.085917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:40.250 [2024-11-05 16:27:53.085948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:40.250 [2024-11-05 16:27:53.086025] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:40.250 [2024-11-05 16:27:53.086053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:40.250 [2024-11-05 16:27:53.086115] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:40.250 [2024-11-05 16:27:53.086174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.250 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:40.250 "name": "Existed_Raid",
00:13:40.250 "uuid": "44552f71-419c-4c2f-b0c7-c0fb8628f310",
00:13:40.250 "strip_size_kb": 0,
00:13:40.250 "state": "configuring",
00:13:40.250 "raid_level": "raid1",
00:13:40.250 "superblock": true,
00:13:40.250 "num_base_bdevs": 4,
00:13:40.250 "num_base_bdevs_discovered": 0,
00:13:40.250 "num_base_bdevs_operational": 4,
00:13:40.250 "base_bdevs_list": [
00:13:40.250 {
00:13:40.250 "name": "BaseBdev1",
00:13:40.250 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.250 "is_configured": false,
00:13:40.250 "data_offset": 0,
00:13:40.250 "data_size": 0
00:13:40.250 },
00:13:40.250 {
00:13:40.250 "name": "BaseBdev2",
00:13:40.250 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.250 "is_configured": false,
00:13:40.250 "data_offset": 0,
00:13:40.250 "data_size": 0
00:13:40.250 },
00:13:40.250 {
00:13:40.250 "name": "BaseBdev3",
00:13:40.250 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.250 "is_configured": false,
00:13:40.250 "data_offset": 0,
00:13:40.250 "data_size": 0
00:13:40.250 },
00:13:40.250 {
00:13:40.251 "name": "BaseBdev4",
00:13:40.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.251 "is_configured": false,
00:13:40.251 "data_offset": 0,
00:13:40.251 "data_size": 0
00:13:40.251 }
00:13:40.251 ]
00:13:40.251 }'
00:13:40.251 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:40.251 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.508 [2024-11-05 16:27:53.532993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:40.508 [2024-11-05 16:27:53.533095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.508 [2024-11-05 16:27:53.544986] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:40.508 [2024-11-05 16:27:53.545036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:40.508 [2024-11-05 16:27:53.545047] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:40.508 [2024-11-05 16:27:53.545059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:40.508 [2024-11-05 16:27:53.545066] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:40.508 [2024-11-05 16:27:53.545077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:40.508 [2024-11-05 16:27:53.545084] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:40.508 [2024-11-05 16:27:53.545095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.508 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.766 [2024-11-05 16:27:53.601217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:40.766 BaseBdev1
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.766 [
00:13:40.766 {
00:13:40.766 "name": "BaseBdev1",
00:13:40.766 "aliases": [
00:13:40.766 "003c3b11-2d70-4fbb-a05e-1cc74890acb2"
00:13:40.766 ],
00:13:40.766 "product_name": "Malloc disk",
00:13:40.766 "block_size": 512,
00:13:40.766 "num_blocks": 65536,
00:13:40.766 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2",
00:13:40.766 "assigned_rate_limits": {
00:13:40.766 "rw_ios_per_sec": 0,
00:13:40.766 "rw_mbytes_per_sec": 0,
00:13:40.766 "r_mbytes_per_sec": 0,
00:13:40.766 "w_mbytes_per_sec": 0
00:13:40.766 },
00:13:40.766 "claimed": true,
00:13:40.766 "claim_type": "exclusive_write",
00:13:40.766 "zoned": false,
00:13:40.766 "supported_io_types": {
00:13:40.766 "read": true,
00:13:40.766 "write": true,
00:13:40.766 "unmap": true,
00:13:40.766 "flush": true,
00:13:40.766 "reset": true,
00:13:40.766 "nvme_admin": false,
00:13:40.766 "nvme_io": false,
00:13:40.766 "nvme_io_md": false,
00:13:40.766 "write_zeroes": true,
00:13:40.766 "zcopy": true,
00:13:40.766 "get_zone_info": false,
00:13:40.766 "zone_management": false,
00:13:40.766 "zone_append": false,
00:13:40.766 "compare": false,
00:13:40.766 "compare_and_write": false,
00:13:40.766 "abort": true,
00:13:40.766 "seek_hole": false,
00:13:40.766 "seek_data": false,
00:13:40.766 "copy": true,
00:13:40.766 "nvme_iov_md": false
00:13:40.766 },
00:13:40.766 "memory_domains": [
00:13:40.766 {
00:13:40.766 "dma_device_id": "system",
00:13:40.766 "dma_device_type": 1
00:13:40.766 },
00:13:40.766 {
00:13:40.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:40.766 "dma_device_type": 2
00:13:40.766 }
00:13:40.766 ],
00:13:40.766 "driver_specific": {}
00:13:40.766 }
00:13:40.766 ]
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:40.766 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:40.767 "name": "Existed_Raid",
00:13:40.767 "uuid": "ec50b80b-922b-4126-bcd2-246e7e150b1c",
00:13:40.767 "strip_size_kb": 0,
00:13:40.767 "state": "configuring",
00:13:40.767 "raid_level": "raid1",
00:13:40.767 "superblock": true,
00:13:40.767 "num_base_bdevs": 4,
00:13:40.767 "num_base_bdevs_discovered": 1,
00:13:40.767 "num_base_bdevs_operational": 4,
00:13:40.767 "base_bdevs_list": [
00:13:40.767 {
00:13:40.767 "name": "BaseBdev1",
00:13:40.767 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2",
00:13:40.767 "is_configured": true,
00:13:40.767 "data_offset": 2048,
00:13:40.767 "data_size": 63488
00:13:40.767 },
00:13:40.767 {
00:13:40.767 "name": "BaseBdev2",
00:13:40.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.767 "is_configured": false,
00:13:40.767 "data_offset": 0,
00:13:40.767 "data_size": 0
00:13:40.767 },
00:13:40.767 {
00:13:40.767 "name": "BaseBdev3",
00:13:40.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.767 "is_configured": false,
00:13:40.767 "data_offset": 0,
00:13:40.767 "data_size": 0
00:13:40.767 },
00:13:40.767 {
00:13:40.767 "name": "BaseBdev4",
00:13:40.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:40.767 "is_configured": false,
00:13:40.767 "data_offset": 0,
00:13:40.767 "data_size": 0
00:13:40.767 }
00:13:40.767 ]
00:13:40.767 }'
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:40.767 16:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.024 [2024-11-05 16:27:54.064677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:41.024 [2024-11-05 16:27:54.064805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.024 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.024 [2024-11-05 16:27:54.072733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:41.024 [2024-11-05 16:27:54.074901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:41.024 [2024-11-05 16:27:54.074988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:41.025 [2024-11-05 16:27:54.075024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:41.025 [2024-11-05 16:27:54.075054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:41.025 [2024-11-05 16:27:54.075078] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:41.025 [2024-11-05 16:27:54.075103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:41.025 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.282 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:41.282 "name": "Existed_Raid",
00:13:41.282 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26",
00:13:41.282 "strip_size_kb": 0,
00:13:41.282 "state": "configuring",
00:13:41.282 "raid_level": "raid1",
00:13:41.282 "superblock": true,
00:13:41.282 "num_base_bdevs": 4,
00:13:41.282 "num_base_bdevs_discovered": 1,
00:13:41.282 "num_base_bdevs_operational": 4,
00:13:41.282 "base_bdevs_list": [
00:13:41.282 {
00:13:41.282 "name": "BaseBdev1",
00:13:41.282 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2",
00:13:41.282 "is_configured": true,
00:13:41.282 "data_offset": 2048,
00:13:41.282 "data_size": 63488
00:13:41.282 },
00:13:41.282 {
00:13:41.282 "name": "BaseBdev2",
00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.282 "is_configured": false,
00:13:41.282 "data_offset": 0,
00:13:41.282 "data_size": 0
00:13:41.282 },
00:13:41.282 {
00:13:41.282 "name": "BaseBdev3",
00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.282 "is_configured": false,
00:13:41.282 "data_offset": 0,
00:13:41.282 "data_size": 0
00:13:41.282 },
00:13:41.282 {
00:13:41.282 "name": "BaseBdev4",
00:13:41.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:41.282 "is_configured": false,
00:13:41.282 "data_offset": 0,
00:13:41.282 "data_size": 0
00:13:41.282 }
00:13:41.282 ]
00:13:41.282 }'
00:13:41.282 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:41.282 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.540 [2024-11-05 16:27:54.520812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:41.540 BaseBdev2
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:41.540 [
00:13:41.540 {
00:13:41.540 "name": "BaseBdev2",
00:13:41.540 "aliases": [
00:13:41.540 "488b1b98-b756-42fd-95c8-420936bb29d3"
00:13:41.540 ],
00:13:41.540 "product_name": "Malloc disk",
00:13:41.540 "block_size": 512,
00:13:41.540 "num_blocks": 65536,
00:13:41.540 "uuid": "488b1b98-b756-42fd-95c8-420936bb29d3",
00:13:41.540 "assigned_rate_limits": {
00:13:41.540 "rw_ios_per_sec": 0, 00:13:41.540 "rw_mbytes_per_sec": 0, 00:13:41.540 "r_mbytes_per_sec": 0, 00:13:41.540 "w_mbytes_per_sec": 0 00:13:41.540 }, 00:13:41.540 "claimed": true, 00:13:41.540 "claim_type": "exclusive_write", 00:13:41.540 "zoned": false, 00:13:41.540 "supported_io_types": { 00:13:41.540 "read": true, 00:13:41.540 "write": true, 00:13:41.540 "unmap": true, 00:13:41.540 "flush": true, 00:13:41.540 "reset": true, 00:13:41.540 "nvme_admin": false, 00:13:41.540 "nvme_io": false, 00:13:41.540 "nvme_io_md": false, 00:13:41.540 "write_zeroes": true, 00:13:41.540 "zcopy": true, 00:13:41.540 "get_zone_info": false, 00:13:41.540 "zone_management": false, 00:13:41.540 "zone_append": false, 00:13:41.540 "compare": false, 00:13:41.540 "compare_and_write": false, 00:13:41.540 "abort": true, 00:13:41.540 "seek_hole": false, 00:13:41.540 "seek_data": false, 00:13:41.540 "copy": true, 00:13:41.540 "nvme_iov_md": false 00:13:41.540 }, 00:13:41.540 "memory_domains": [ 00:13:41.540 { 00:13:41.540 "dma_device_id": "system", 00:13:41.540 "dma_device_type": 1 00:13:41.540 }, 00:13:41.540 { 00:13:41.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.540 "dma_device_type": 2 00:13:41.540 } 00:13:41.540 ], 00:13:41.540 "driver_specific": {} 00:13:41.540 } 00:13:41.540 ] 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.540 "name": "Existed_Raid", 00:13:41.540 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:41.540 "strip_size_kb": 0, 00:13:41.540 "state": "configuring", 00:13:41.540 "raid_level": "raid1", 00:13:41.540 "superblock": true, 00:13:41.540 "num_base_bdevs": 4, 00:13:41.540 "num_base_bdevs_discovered": 2, 00:13:41.540 "num_base_bdevs_operational": 4, 00:13:41.540 
"base_bdevs_list": [ 00:13:41.540 { 00:13:41.540 "name": "BaseBdev1", 00:13:41.540 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2", 00:13:41.540 "is_configured": true, 00:13:41.540 "data_offset": 2048, 00:13:41.540 "data_size": 63488 00:13:41.540 }, 00:13:41.540 { 00:13:41.540 "name": "BaseBdev2", 00:13:41.540 "uuid": "488b1b98-b756-42fd-95c8-420936bb29d3", 00:13:41.540 "is_configured": true, 00:13:41.540 "data_offset": 2048, 00:13:41.540 "data_size": 63488 00:13:41.540 }, 00:13:41.540 { 00:13:41.540 "name": "BaseBdev3", 00:13:41.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.540 "is_configured": false, 00:13:41.540 "data_offset": 0, 00:13:41.540 "data_size": 0 00:13:41.540 }, 00:13:41.540 { 00:13:41.540 "name": "BaseBdev4", 00:13:41.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.540 "is_configured": false, 00:13:41.540 "data_offset": 0, 00:13:41.540 "data_size": 0 00:13:41.540 } 00:13:41.540 ] 00:13:41.540 }' 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.540 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 16:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.106 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 16:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 [2024-11-05 16:27:55.022475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.106 BaseBdev3 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 [ 00:13:42.106 { 00:13:42.106 "name": "BaseBdev3", 00:13:42.106 "aliases": [ 00:13:42.106 "39da41d1-7e15-4311-8a20-954b87e398b2" 00:13:42.106 ], 00:13:42.106 "product_name": "Malloc disk", 00:13:42.106 "block_size": 512, 00:13:42.106 "num_blocks": 65536, 00:13:42.106 "uuid": "39da41d1-7e15-4311-8a20-954b87e398b2", 00:13:42.106 "assigned_rate_limits": { 00:13:42.107 "rw_ios_per_sec": 0, 00:13:42.107 "rw_mbytes_per_sec": 0, 00:13:42.107 "r_mbytes_per_sec": 0, 00:13:42.107 "w_mbytes_per_sec": 0 00:13:42.107 }, 00:13:42.107 "claimed": true, 00:13:42.107 "claim_type": "exclusive_write", 00:13:42.107 "zoned": false, 00:13:42.107 "supported_io_types": { 00:13:42.107 "read": true, 00:13:42.107 
"write": true, 00:13:42.107 "unmap": true, 00:13:42.107 "flush": true, 00:13:42.107 "reset": true, 00:13:42.107 "nvme_admin": false, 00:13:42.107 "nvme_io": false, 00:13:42.107 "nvme_io_md": false, 00:13:42.107 "write_zeroes": true, 00:13:42.107 "zcopy": true, 00:13:42.107 "get_zone_info": false, 00:13:42.107 "zone_management": false, 00:13:42.107 "zone_append": false, 00:13:42.107 "compare": false, 00:13:42.107 "compare_and_write": false, 00:13:42.107 "abort": true, 00:13:42.107 "seek_hole": false, 00:13:42.107 "seek_data": false, 00:13:42.107 "copy": true, 00:13:42.107 "nvme_iov_md": false 00:13:42.107 }, 00:13:42.107 "memory_domains": [ 00:13:42.107 { 00:13:42.107 "dma_device_id": "system", 00:13:42.107 "dma_device_type": 1 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.107 "dma_device_type": 2 00:13:42.107 } 00:13:42.107 ], 00:13:42.107 "driver_specific": {} 00:13:42.107 } 00:13:42.107 ] 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.107 "name": "Existed_Raid", 00:13:42.107 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:42.107 "strip_size_kb": 0, 00:13:42.107 "state": "configuring", 00:13:42.107 "raid_level": "raid1", 00:13:42.107 "superblock": true, 00:13:42.107 "num_base_bdevs": 4, 00:13:42.107 "num_base_bdevs_discovered": 3, 00:13:42.107 "num_base_bdevs_operational": 4, 00:13:42.107 "base_bdevs_list": [ 00:13:42.107 { 00:13:42.107 "name": "BaseBdev1", 00:13:42.107 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev2", 00:13:42.107 "uuid": 
"488b1b98-b756-42fd-95c8-420936bb29d3", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev3", 00:13:42.107 "uuid": "39da41d1-7e15-4311-8a20-954b87e398b2", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev4", 00:13:42.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.107 "is_configured": false, 00:13:42.107 "data_offset": 0, 00:13:42.107 "data_size": 0 00:13:42.107 } 00:13:42.107 ] 00:13:42.107 }' 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.107 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.675 [2024-11-05 16:27:55.561036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.675 [2024-11-05 16:27:55.561416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.675 [2024-11-05 16:27:55.561476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.675 [2024-11-05 16:27:55.561846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.675 BaseBdev4 00:13:42.675 [2024-11-05 16:27:55.562087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:42.675 [2024-11-05 16:27:55.562143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.675 [2024-11-05 16:27:55.562371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.675 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.675 [ 00:13:42.675 { 00:13:42.675 "name": "BaseBdev4", 00:13:42.675 "aliases": [ 00:13:42.675 "b282ea90-78f9-44b5-aefd-24417b45d73c" 00:13:42.675 ], 00:13:42.675 "product_name": "Malloc disk", 00:13:42.675 "block_size": 512, 00:13:42.675 
"num_blocks": 65536, 00:13:42.675 "uuid": "b282ea90-78f9-44b5-aefd-24417b45d73c", 00:13:42.675 "assigned_rate_limits": { 00:13:42.675 "rw_ios_per_sec": 0, 00:13:42.675 "rw_mbytes_per_sec": 0, 00:13:42.675 "r_mbytes_per_sec": 0, 00:13:42.675 "w_mbytes_per_sec": 0 00:13:42.675 }, 00:13:42.675 "claimed": true, 00:13:42.675 "claim_type": "exclusive_write", 00:13:42.675 "zoned": false, 00:13:42.675 "supported_io_types": { 00:13:42.675 "read": true, 00:13:42.675 "write": true, 00:13:42.675 "unmap": true, 00:13:42.675 "flush": true, 00:13:42.675 "reset": true, 00:13:42.675 "nvme_admin": false, 00:13:42.675 "nvme_io": false, 00:13:42.675 "nvme_io_md": false, 00:13:42.675 "write_zeroes": true, 00:13:42.675 "zcopy": true, 00:13:42.675 "get_zone_info": false, 00:13:42.675 "zone_management": false, 00:13:42.675 "zone_append": false, 00:13:42.675 "compare": false, 00:13:42.675 "compare_and_write": false, 00:13:42.675 "abort": true, 00:13:42.675 "seek_hole": false, 00:13:42.675 "seek_data": false, 00:13:42.675 "copy": true, 00:13:42.675 "nvme_iov_md": false 00:13:42.675 }, 00:13:42.675 "memory_domains": [ 00:13:42.676 { 00:13:42.676 "dma_device_id": "system", 00:13:42.676 "dma_device_type": 1 00:13:42.676 }, 00:13:42.676 { 00:13:42.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.676 "dma_device_type": 2 00:13:42.676 } 00:13:42.676 ], 00:13:42.676 "driver_specific": {} 00:13:42.676 } 00:13:42.676 ] 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.676 "name": "Existed_Raid", 00:13:42.676 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:42.676 "strip_size_kb": 0, 00:13:42.676 "state": "online", 00:13:42.676 "raid_level": "raid1", 00:13:42.676 "superblock": true, 00:13:42.676 "num_base_bdevs": 4, 
00:13:42.676 "num_base_bdevs_discovered": 4, 00:13:42.676 "num_base_bdevs_operational": 4, 00:13:42.676 "base_bdevs_list": [ 00:13:42.676 { 00:13:42.676 "name": "BaseBdev1", 00:13:42.676 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2", 00:13:42.676 "is_configured": true, 00:13:42.676 "data_offset": 2048, 00:13:42.676 "data_size": 63488 00:13:42.676 }, 00:13:42.676 { 00:13:42.676 "name": "BaseBdev2", 00:13:42.676 "uuid": "488b1b98-b756-42fd-95c8-420936bb29d3", 00:13:42.676 "is_configured": true, 00:13:42.676 "data_offset": 2048, 00:13:42.676 "data_size": 63488 00:13:42.676 }, 00:13:42.676 { 00:13:42.676 "name": "BaseBdev3", 00:13:42.676 "uuid": "39da41d1-7e15-4311-8a20-954b87e398b2", 00:13:42.676 "is_configured": true, 00:13:42.676 "data_offset": 2048, 00:13:42.676 "data_size": 63488 00:13:42.676 }, 00:13:42.676 { 00:13:42.676 "name": "BaseBdev4", 00:13:42.676 "uuid": "b282ea90-78f9-44b5-aefd-24417b45d73c", 00:13:42.676 "is_configured": true, 00:13:42.676 "data_offset": 2048, 00:13:42.676 "data_size": 63488 00:13:42.676 } 00:13:42.676 ] 00:13:42.676 }' 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.676 16:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.934 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.934 
16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.193 [2024-11-05 16:27:56.029054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.193 "name": "Existed_Raid", 00:13:43.193 "aliases": [ 00:13:43.193 "bb5e3306-102c-4ff4-8061-64005b7f5d26" 00:13:43.193 ], 00:13:43.193 "product_name": "Raid Volume", 00:13:43.193 "block_size": 512, 00:13:43.193 "num_blocks": 63488, 00:13:43.193 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:43.193 "assigned_rate_limits": { 00:13:43.193 "rw_ios_per_sec": 0, 00:13:43.193 "rw_mbytes_per_sec": 0, 00:13:43.193 "r_mbytes_per_sec": 0, 00:13:43.193 "w_mbytes_per_sec": 0 00:13:43.193 }, 00:13:43.193 "claimed": false, 00:13:43.193 "zoned": false, 00:13:43.193 "supported_io_types": { 00:13:43.193 "read": true, 00:13:43.193 "write": true, 00:13:43.193 "unmap": false, 00:13:43.193 "flush": false, 00:13:43.193 "reset": true, 00:13:43.193 "nvme_admin": false, 00:13:43.193 "nvme_io": false, 00:13:43.193 "nvme_io_md": false, 00:13:43.193 "write_zeroes": true, 00:13:43.193 "zcopy": false, 00:13:43.193 "get_zone_info": false, 00:13:43.193 "zone_management": false, 00:13:43.193 "zone_append": false, 00:13:43.193 "compare": false, 00:13:43.193 "compare_and_write": false, 00:13:43.193 "abort": false, 00:13:43.193 "seek_hole": false, 00:13:43.193 "seek_data": false, 00:13:43.193 "copy": false, 00:13:43.193 
"nvme_iov_md": false 00:13:43.193 }, 00:13:43.193 "memory_domains": [ 00:13:43.193 { 00:13:43.193 "dma_device_id": "system", 00:13:43.193 "dma_device_type": 1 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.193 "dma_device_type": 2 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "system", 00:13:43.193 "dma_device_type": 1 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.193 "dma_device_type": 2 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "system", 00:13:43.193 "dma_device_type": 1 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.193 "dma_device_type": 2 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "system", 00:13:43.193 "dma_device_type": 1 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.193 "dma_device_type": 2 00:13:43.193 } 00:13:43.193 ], 00:13:43.193 "driver_specific": { 00:13:43.193 "raid": { 00:13:43.193 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:43.193 "strip_size_kb": 0, 00:13:43.193 "state": "online", 00:13:43.193 "raid_level": "raid1", 00:13:43.193 "superblock": true, 00:13:43.193 "num_base_bdevs": 4, 00:13:43.193 "num_base_bdevs_discovered": 4, 00:13:43.193 "num_base_bdevs_operational": 4, 00:13:43.193 "base_bdevs_list": [ 00:13:43.193 { 00:13:43.193 "name": "BaseBdev1", 00:13:43.193 "uuid": "003c3b11-2d70-4fbb-a05e-1cc74890acb2", 00:13:43.193 "is_configured": true, 00:13:43.193 "data_offset": 2048, 00:13:43.193 "data_size": 63488 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "name": "BaseBdev2", 00:13:43.193 "uuid": "488b1b98-b756-42fd-95c8-420936bb29d3", 00:13:43.193 "is_configured": true, 00:13:43.193 "data_offset": 2048, 00:13:43.193 "data_size": 63488 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "name": "BaseBdev3", 00:13:43.193 "uuid": "39da41d1-7e15-4311-8a20-954b87e398b2", 00:13:43.193 "is_configured": true, 
00:13:43.193 "data_offset": 2048, 00:13:43.193 "data_size": 63488 00:13:43.193 }, 00:13:43.193 { 00:13:43.193 "name": "BaseBdev4", 00:13:43.193 "uuid": "b282ea90-78f9-44b5-aefd-24417b45d73c", 00:13:43.193 "is_configured": true, 00:13:43.193 "data_offset": 2048, 00:13:43.193 "data_size": 63488 00:13:43.193 } 00:13:43.193 ] 00:13:43.193 } 00:13:43.193 } 00:13:43.193 }' 00:13:43.193 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.194 BaseBdev2 00:13:43.194 BaseBdev3 00:13:43.194 BaseBdev4' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.194 16:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.452 [2024-11-05 16:27:56.312769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:43.452 16:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.452 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.452 "name": "Existed_Raid", 00:13:43.452 "uuid": "bb5e3306-102c-4ff4-8061-64005b7f5d26", 00:13:43.452 "strip_size_kb": 0, 00:13:43.452 
"state": "online", 00:13:43.452 "raid_level": "raid1", 00:13:43.452 "superblock": true, 00:13:43.452 "num_base_bdevs": 4, 00:13:43.452 "num_base_bdevs_discovered": 3, 00:13:43.452 "num_base_bdevs_operational": 3, 00:13:43.452 "base_bdevs_list": [ 00:13:43.452 { 00:13:43.452 "name": null, 00:13:43.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.452 "is_configured": false, 00:13:43.452 "data_offset": 0, 00:13:43.452 "data_size": 63488 00:13:43.452 }, 00:13:43.452 { 00:13:43.452 "name": "BaseBdev2", 00:13:43.452 "uuid": "488b1b98-b756-42fd-95c8-420936bb29d3", 00:13:43.452 "is_configured": true, 00:13:43.453 "data_offset": 2048, 00:13:43.453 "data_size": 63488 00:13:43.453 }, 00:13:43.453 { 00:13:43.453 "name": "BaseBdev3", 00:13:43.453 "uuid": "39da41d1-7e15-4311-8a20-954b87e398b2", 00:13:43.453 "is_configured": true, 00:13:43.453 "data_offset": 2048, 00:13:43.453 "data_size": 63488 00:13:43.453 }, 00:13:43.453 { 00:13:43.453 "name": "BaseBdev4", 00:13:43.453 "uuid": "b282ea90-78f9-44b5-aefd-24417b45d73c", 00:13:43.453 "is_configured": true, 00:13:43.453 "data_offset": 2048, 00:13:43.453 "data_size": 63488 00:13:43.453 } 00:13:43.453 ] 00:13:43.453 }' 00:13:43.453 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.453 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.018 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.018 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.018 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.018 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.019 16:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.019 16:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.019 [2024-11-05 16:27:56.924737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.019 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.019 [2024-11-05 16:27:57.101184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.278 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.278 [2024-11-05 16:27:57.272752] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:44.278 [2024-11-05 16:27:57.272867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.537 [2024-11-05 16:27:57.389117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.537 [2024-11-05 16:27:57.389289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.537 [2024-11-05 16:27:57.389312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.537 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 BaseBdev2 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:44.538 [ 00:13:44.538 { 00:13:44.538 "name": "BaseBdev2", 00:13:44.538 "aliases": [ 00:13:44.538 "4d2b0999-03f0-453a-be91-4115acd7e47e" 00:13:44.538 ], 00:13:44.538 "product_name": "Malloc disk", 00:13:44.538 "block_size": 512, 00:13:44.538 "num_blocks": 65536, 00:13:44.538 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:44.538 "assigned_rate_limits": { 00:13:44.538 "rw_ios_per_sec": 0, 00:13:44.538 "rw_mbytes_per_sec": 0, 00:13:44.538 "r_mbytes_per_sec": 0, 00:13:44.538 "w_mbytes_per_sec": 0 00:13:44.538 }, 00:13:44.538 "claimed": false, 00:13:44.538 "zoned": false, 00:13:44.538 "supported_io_types": { 00:13:44.538 "read": true, 00:13:44.538 "write": true, 00:13:44.538 "unmap": true, 00:13:44.538 "flush": true, 00:13:44.538 "reset": true, 00:13:44.538 "nvme_admin": false, 00:13:44.538 "nvme_io": false, 00:13:44.538 "nvme_io_md": false, 00:13:44.538 "write_zeroes": true, 00:13:44.538 "zcopy": true, 00:13:44.538 "get_zone_info": false, 00:13:44.538 "zone_management": false, 00:13:44.538 "zone_append": false, 00:13:44.538 "compare": false, 00:13:44.538 "compare_and_write": false, 00:13:44.538 "abort": true, 00:13:44.538 "seek_hole": false, 00:13:44.538 "seek_data": false, 00:13:44.538 "copy": true, 00:13:44.538 "nvme_iov_md": false 00:13:44.538 }, 00:13:44.538 "memory_domains": [ 00:13:44.538 { 00:13:44.538 "dma_device_id": "system", 00:13:44.538 "dma_device_type": 1 00:13:44.538 }, 00:13:44.538 { 00:13:44.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.538 "dma_device_type": 2 00:13:44.538 } 00:13:44.538 ], 00:13:44.538 "driver_specific": {} 00:13:44.538 } 00:13:44.538 ] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.538 16:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 BaseBdev3 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.538 [ 00:13:44.538 { 00:13:44.538 "name": "BaseBdev3", 00:13:44.538 "aliases": [ 00:13:44.538 "93726d21-84af-4dc9-b9ab-2735d7bbd028" 00:13:44.538 ], 00:13:44.538 "product_name": "Malloc disk", 00:13:44.538 "block_size": 512, 00:13:44.538 "num_blocks": 65536, 00:13:44.538 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:44.538 "assigned_rate_limits": { 00:13:44.538 "rw_ios_per_sec": 0, 00:13:44.538 "rw_mbytes_per_sec": 0, 00:13:44.538 "r_mbytes_per_sec": 0, 00:13:44.538 "w_mbytes_per_sec": 0 00:13:44.538 }, 00:13:44.538 "claimed": false, 00:13:44.538 "zoned": false, 00:13:44.538 "supported_io_types": { 00:13:44.538 "read": true, 00:13:44.538 "write": true, 00:13:44.538 "unmap": true, 00:13:44.538 "flush": true, 00:13:44.538 "reset": true, 00:13:44.538 "nvme_admin": false, 00:13:44.538 "nvme_io": false, 00:13:44.538 "nvme_io_md": false, 00:13:44.538 "write_zeroes": true, 00:13:44.538 "zcopy": true, 00:13:44.538 "get_zone_info": false, 00:13:44.538 "zone_management": false, 00:13:44.538 "zone_append": false, 00:13:44.538 "compare": false, 00:13:44.538 "compare_and_write": false, 00:13:44.538 "abort": true, 00:13:44.538 "seek_hole": false, 00:13:44.538 "seek_data": false, 00:13:44.538 "copy": true, 00:13:44.538 "nvme_iov_md": false 00:13:44.538 }, 00:13:44.538 "memory_domains": [ 00:13:44.538 { 00:13:44.538 "dma_device_id": "system", 00:13:44.538 "dma_device_type": 1 00:13:44.538 }, 00:13:44.538 { 00:13:44.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.538 "dma_device_type": 2 00:13:44.538 } 00:13:44.538 ], 00:13:44.538 "driver_specific": {} 00:13:44.538 } 00:13:44.538 ] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.538 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.798 BaseBdev4 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.798 [ 00:13:44.798 { 00:13:44.798 "name": "BaseBdev4", 00:13:44.798 "aliases": [ 00:13:44.798 "906df9eb-a0bd-404d-a846-4939b8d8d369" 00:13:44.798 ], 00:13:44.798 "product_name": "Malloc disk", 00:13:44.798 "block_size": 512, 00:13:44.798 "num_blocks": 65536, 00:13:44.798 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:44.798 "assigned_rate_limits": { 00:13:44.798 "rw_ios_per_sec": 0, 00:13:44.798 "rw_mbytes_per_sec": 0, 00:13:44.798 "r_mbytes_per_sec": 0, 00:13:44.798 "w_mbytes_per_sec": 0 00:13:44.798 }, 00:13:44.798 "claimed": false, 00:13:44.798 "zoned": false, 00:13:44.798 "supported_io_types": { 00:13:44.798 "read": true, 00:13:44.798 "write": true, 00:13:44.798 "unmap": true, 00:13:44.798 "flush": true, 00:13:44.798 "reset": true, 00:13:44.798 "nvme_admin": false, 00:13:44.798 "nvme_io": false, 00:13:44.798 "nvme_io_md": false, 00:13:44.798 "write_zeroes": true, 00:13:44.798 "zcopy": true, 00:13:44.798 "get_zone_info": false, 00:13:44.798 "zone_management": false, 00:13:44.798 "zone_append": false, 00:13:44.798 "compare": false, 00:13:44.798 "compare_and_write": false, 00:13:44.798 "abort": true, 00:13:44.798 "seek_hole": false, 00:13:44.798 "seek_data": false, 00:13:44.798 "copy": true, 00:13:44.798 "nvme_iov_md": false 00:13:44.798 }, 00:13:44.798 "memory_domains": [ 00:13:44.798 { 00:13:44.798 "dma_device_id": "system", 00:13:44.798 "dma_device_type": 1 00:13:44.798 }, 00:13:44.798 { 00:13:44.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.798 "dma_device_type": 2 00:13:44.798 } 00:13:44.798 ], 00:13:44.798 "driver_specific": {} 00:13:44.798 } 00:13:44.798 ] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.798 [2024-11-05 16:27:57.670101] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.798 [2024-11-05 16:27:57.670157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.798 [2024-11-05 16:27:57.670184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.798 [2024-11-05 16:27:57.672299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.798 [2024-11-05 16:27:57.672357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.798 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.799 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.799 "name": "Existed_Raid", 00:13:44.799 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:44.799 "strip_size_kb": 0, 00:13:44.799 "state": "configuring", 00:13:44.799 "raid_level": "raid1", 00:13:44.799 "superblock": true, 00:13:44.799 "num_base_bdevs": 4, 00:13:44.799 "num_base_bdevs_discovered": 3, 00:13:44.799 "num_base_bdevs_operational": 4, 00:13:44.799 "base_bdevs_list": [ 00:13:44.799 { 00:13:44.799 "name": "BaseBdev1", 00:13:44.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.799 "is_configured": false, 00:13:44.799 "data_offset": 0, 00:13:44.799 "data_size": 0 00:13:44.799 }, 00:13:44.799 { 00:13:44.799 "name": "BaseBdev2", 00:13:44.799 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 
00:13:44.799 "is_configured": true, 00:13:44.799 "data_offset": 2048, 00:13:44.799 "data_size": 63488 00:13:44.799 }, 00:13:44.799 { 00:13:44.799 "name": "BaseBdev3", 00:13:44.799 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:44.799 "is_configured": true, 00:13:44.799 "data_offset": 2048, 00:13:44.799 "data_size": 63488 00:13:44.799 }, 00:13:44.799 { 00:13:44.799 "name": "BaseBdev4", 00:13:44.799 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:44.799 "is_configured": true, 00:13:44.799 "data_offset": 2048, 00:13:44.799 "data_size": 63488 00:13:44.799 } 00:13:44.799 ] 00:13:44.799 }' 00:13:44.799 16:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.799 16:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.057 [2024-11-05 16:27:58.129389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.057 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.315 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.315 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.315 "name": "Existed_Raid", 00:13:45.315 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:45.315 "strip_size_kb": 0, 00:13:45.315 "state": "configuring", 00:13:45.315 "raid_level": "raid1", 00:13:45.315 "superblock": true, 00:13:45.315 "num_base_bdevs": 4, 00:13:45.315 "num_base_bdevs_discovered": 2, 00:13:45.315 "num_base_bdevs_operational": 4, 00:13:45.315 "base_bdevs_list": [ 00:13:45.315 { 00:13:45.315 "name": "BaseBdev1", 00:13:45.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.315 "is_configured": false, 00:13:45.315 "data_offset": 0, 00:13:45.315 "data_size": 0 00:13:45.315 }, 00:13:45.315 { 00:13:45.315 "name": null, 00:13:45.315 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:45.315 
"is_configured": false, 00:13:45.315 "data_offset": 0, 00:13:45.315 "data_size": 63488 00:13:45.315 }, 00:13:45.315 { 00:13:45.315 "name": "BaseBdev3", 00:13:45.315 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:45.315 "is_configured": true, 00:13:45.315 "data_offset": 2048, 00:13:45.315 "data_size": 63488 00:13:45.315 }, 00:13:45.315 { 00:13:45.315 "name": "BaseBdev4", 00:13:45.315 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:45.315 "is_configured": true, 00:13:45.315 "data_offset": 2048, 00:13:45.315 "data_size": 63488 00:13:45.315 } 00:13:45.315 ] 00:13:45.315 }' 00:13:45.315 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.315 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 [2024-11-05 16:27:58.635355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.577 BaseBdev1 
00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 [ 00:13:45.577 { 00:13:45.577 "name": "BaseBdev1", 00:13:45.577 "aliases": [ 00:13:45.577 "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a" 00:13:45.577 ], 00:13:45.577 "product_name": "Malloc disk", 00:13:45.577 "block_size": 512, 00:13:45.577 "num_blocks": 65536, 00:13:45.577 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:45.577 "assigned_rate_limits": { 00:13:45.577 
"rw_ios_per_sec": 0, 00:13:45.577 "rw_mbytes_per_sec": 0, 00:13:45.577 "r_mbytes_per_sec": 0, 00:13:45.577 "w_mbytes_per_sec": 0 00:13:45.577 }, 00:13:45.577 "claimed": true, 00:13:45.577 "claim_type": "exclusive_write", 00:13:45.577 "zoned": false, 00:13:45.577 "supported_io_types": { 00:13:45.577 "read": true, 00:13:45.577 "write": true, 00:13:45.577 "unmap": true, 00:13:45.577 "flush": true, 00:13:45.577 "reset": true, 00:13:45.577 "nvme_admin": false, 00:13:45.577 "nvme_io": false, 00:13:45.577 "nvme_io_md": false, 00:13:45.577 "write_zeroes": true, 00:13:45.577 "zcopy": true, 00:13:45.577 "get_zone_info": false, 00:13:45.577 "zone_management": false, 00:13:45.577 "zone_append": false, 00:13:45.577 "compare": false, 00:13:45.577 "compare_and_write": false, 00:13:45.577 "abort": true, 00:13:45.577 "seek_hole": false, 00:13:45.577 "seek_data": false, 00:13:45.577 "copy": true, 00:13:45.577 "nvme_iov_md": false 00:13:45.577 }, 00:13:45.577 "memory_domains": [ 00:13:45.577 { 00:13:45.577 "dma_device_id": "system", 00:13:45.577 "dma_device_type": 1 00:13:45.577 }, 00:13:45.577 { 00:13:45.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.577 "dma_device_type": 2 00:13:45.577 } 00:13:45.577 ], 00:13:45.577 "driver_specific": {} 00:13:45.577 } 00:13:45.577 ] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.577 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.848 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.848 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.848 "name": "Existed_Raid", 00:13:45.848 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:45.848 "strip_size_kb": 0, 00:13:45.848 "state": "configuring", 00:13:45.848 "raid_level": "raid1", 00:13:45.848 "superblock": true, 00:13:45.848 "num_base_bdevs": 4, 00:13:45.848 "num_base_bdevs_discovered": 3, 00:13:45.848 "num_base_bdevs_operational": 4, 00:13:45.848 "base_bdevs_list": [ 00:13:45.848 { 00:13:45.848 "name": "BaseBdev1", 00:13:45.848 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:45.848 "is_configured": true, 00:13:45.848 "data_offset": 2048, 00:13:45.848 "data_size": 63488 
00:13:45.848 }, 00:13:45.848 { 00:13:45.848 "name": null, 00:13:45.848 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:45.848 "is_configured": false, 00:13:45.848 "data_offset": 0, 00:13:45.848 "data_size": 63488 00:13:45.848 }, 00:13:45.848 { 00:13:45.848 "name": "BaseBdev3", 00:13:45.848 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:45.848 "is_configured": true, 00:13:45.848 "data_offset": 2048, 00:13:45.848 "data_size": 63488 00:13:45.848 }, 00:13:45.848 { 00:13:45.848 "name": "BaseBdev4", 00:13:45.848 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:45.848 "is_configured": true, 00:13:45.848 "data_offset": 2048, 00:13:45.848 "data_size": 63488 00:13:45.848 } 00:13:45.848 ] 00:13:45.848 }' 00:13:45.848 16:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.848 16:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.106 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.364 
[2024-11-05 16:27:59.198576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.364 16:27:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.364 "name": "Existed_Raid", 00:13:46.364 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:46.364 "strip_size_kb": 0, 00:13:46.364 "state": "configuring", 00:13:46.364 "raid_level": "raid1", 00:13:46.364 "superblock": true, 00:13:46.364 "num_base_bdevs": 4, 00:13:46.364 "num_base_bdevs_discovered": 2, 00:13:46.364 "num_base_bdevs_operational": 4, 00:13:46.364 "base_bdevs_list": [ 00:13:46.364 { 00:13:46.364 "name": "BaseBdev1", 00:13:46.364 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:46.364 "is_configured": true, 00:13:46.364 "data_offset": 2048, 00:13:46.364 "data_size": 63488 00:13:46.364 }, 00:13:46.364 { 00:13:46.364 "name": null, 00:13:46.364 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:46.364 "is_configured": false, 00:13:46.364 "data_offset": 0, 00:13:46.364 "data_size": 63488 00:13:46.364 }, 00:13:46.364 { 00:13:46.364 "name": null, 00:13:46.364 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:46.364 "is_configured": false, 00:13:46.364 "data_offset": 0, 00:13:46.364 "data_size": 63488 00:13:46.364 }, 00:13:46.364 { 00:13:46.364 "name": "BaseBdev4", 00:13:46.364 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:46.364 "is_configured": true, 00:13:46.364 "data_offset": 2048, 00:13:46.364 "data_size": 63488 00:13:46.364 } 00:13:46.364 ] 00:13:46.364 }' 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.364 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.622 
16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.622 [2024-11-05 16:27:59.693687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.622 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.623 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.881 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.881 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.881 "name": "Existed_Raid", 00:13:46.881 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:46.881 "strip_size_kb": 0, 00:13:46.881 "state": "configuring", 00:13:46.881 "raid_level": "raid1", 00:13:46.881 "superblock": true, 00:13:46.881 "num_base_bdevs": 4, 00:13:46.881 "num_base_bdevs_discovered": 3, 00:13:46.881 "num_base_bdevs_operational": 4, 00:13:46.881 "base_bdevs_list": [ 00:13:46.881 { 00:13:46.881 "name": "BaseBdev1", 00:13:46.881 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:46.881 "is_configured": true, 00:13:46.881 "data_offset": 2048, 00:13:46.881 "data_size": 63488 00:13:46.881 }, 00:13:46.881 { 00:13:46.881 "name": null, 00:13:46.881 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:46.881 "is_configured": false, 00:13:46.881 "data_offset": 0, 00:13:46.881 "data_size": 63488 00:13:46.881 }, 00:13:46.881 { 00:13:46.881 "name": "BaseBdev3", 00:13:46.881 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:46.881 "is_configured": true, 00:13:46.881 "data_offset": 2048, 00:13:46.881 "data_size": 63488 00:13:46.881 }, 00:13:46.881 { 00:13:46.881 "name": "BaseBdev4", 00:13:46.881 "uuid": 
"906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:46.881 "is_configured": true, 00:13:46.881 "data_offset": 2048, 00:13:46.881 "data_size": 63488 00:13:46.881 } 00:13:46.881 ] 00:13:46.881 }' 00:13:46.881 16:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.881 16:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.139 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.139 [2024-11-05 16:28:00.200938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.398 "name": "Existed_Raid", 00:13:47.398 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:47.398 "strip_size_kb": 0, 00:13:47.398 "state": "configuring", 00:13:47.398 "raid_level": "raid1", 00:13:47.398 "superblock": true, 00:13:47.398 "num_base_bdevs": 4, 00:13:47.398 "num_base_bdevs_discovered": 2, 00:13:47.398 "num_base_bdevs_operational": 4, 00:13:47.398 "base_bdevs_list": [ 00:13:47.398 { 00:13:47.398 "name": null, 00:13:47.398 
"uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:47.398 "is_configured": false, 00:13:47.398 "data_offset": 0, 00:13:47.398 "data_size": 63488 00:13:47.398 }, 00:13:47.398 { 00:13:47.398 "name": null, 00:13:47.398 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:47.398 "is_configured": false, 00:13:47.398 "data_offset": 0, 00:13:47.398 "data_size": 63488 00:13:47.398 }, 00:13:47.398 { 00:13:47.398 "name": "BaseBdev3", 00:13:47.398 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:47.398 "is_configured": true, 00:13:47.398 "data_offset": 2048, 00:13:47.398 "data_size": 63488 00:13:47.398 }, 00:13:47.398 { 00:13:47.398 "name": "BaseBdev4", 00:13:47.398 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:47.398 "is_configured": true, 00:13:47.398 "data_offset": 2048, 00:13:47.398 "data_size": 63488 00:13:47.398 } 00:13:47.398 ] 00:13:47.398 }' 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.398 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.965 [2024-11-05 16:28:00.807235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.965 "name": "Existed_Raid", 00:13:47.965 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:47.965 "strip_size_kb": 0, 00:13:47.965 "state": "configuring", 00:13:47.965 "raid_level": "raid1", 00:13:47.965 "superblock": true, 00:13:47.965 "num_base_bdevs": 4, 00:13:47.965 "num_base_bdevs_discovered": 3, 00:13:47.965 "num_base_bdevs_operational": 4, 00:13:47.965 "base_bdevs_list": [ 00:13:47.965 { 00:13:47.965 "name": null, 00:13:47.965 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:47.965 "is_configured": false, 00:13:47.965 "data_offset": 0, 00:13:47.965 "data_size": 63488 00:13:47.965 }, 00:13:47.965 { 00:13:47.965 "name": "BaseBdev2", 00:13:47.965 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:47.965 "is_configured": true, 00:13:47.965 "data_offset": 2048, 00:13:47.965 "data_size": 63488 00:13:47.965 }, 00:13:47.965 { 00:13:47.965 "name": "BaseBdev3", 00:13:47.965 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:47.965 "is_configured": true, 00:13:47.965 "data_offset": 2048, 00:13:47.965 "data_size": 63488 00:13:47.965 }, 00:13:47.965 { 00:13:47.965 "name": "BaseBdev4", 00:13:47.965 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:47.965 "is_configured": true, 00:13:47.965 "data_offset": 2048, 00:13:47.965 "data_size": 63488 00:13:47.965 } 00:13:47.965 ] 00:13:47.965 }' 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.965 16:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.223 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.223 16:28:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.223 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.223 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.223 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 [2024-11-05 16:28:01.426121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:48.480 [2024-11-05 16:28:01.426411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:48.480 [2024-11-05 16:28:01.426435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.480 [2024-11-05 16:28:01.426770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:48.480 
[2024-11-05 16:28:01.426958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:48.480 [2024-11-05 16:28:01.426971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:48.480 NewBaseBdev 00:13:48.480 [2024-11-05 16:28:01.427128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.480 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.481 [ 00:13:48.481 { 00:13:48.481 "name": "NewBaseBdev", 00:13:48.481 "aliases": [ 00:13:48.481 "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a" 00:13:48.481 ], 00:13:48.481 "product_name": "Malloc disk", 00:13:48.481 "block_size": 512, 00:13:48.481 "num_blocks": 65536, 00:13:48.481 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:48.481 "assigned_rate_limits": { 00:13:48.481 "rw_ios_per_sec": 0, 00:13:48.481 "rw_mbytes_per_sec": 0, 00:13:48.481 "r_mbytes_per_sec": 0, 00:13:48.481 "w_mbytes_per_sec": 0 00:13:48.481 }, 00:13:48.481 "claimed": true, 00:13:48.481 "claim_type": "exclusive_write", 00:13:48.481 "zoned": false, 00:13:48.481 "supported_io_types": { 00:13:48.481 "read": true, 00:13:48.481 "write": true, 00:13:48.481 "unmap": true, 00:13:48.481 "flush": true, 00:13:48.481 "reset": true, 00:13:48.481 "nvme_admin": false, 00:13:48.481 "nvme_io": false, 00:13:48.481 "nvme_io_md": false, 00:13:48.481 "write_zeroes": true, 00:13:48.481 "zcopy": true, 00:13:48.481 "get_zone_info": false, 00:13:48.481 "zone_management": false, 00:13:48.481 "zone_append": false, 00:13:48.481 "compare": false, 00:13:48.481 "compare_and_write": false, 00:13:48.481 "abort": true, 00:13:48.481 "seek_hole": false, 00:13:48.481 "seek_data": false, 00:13:48.481 "copy": true, 00:13:48.481 "nvme_iov_md": false 00:13:48.481 }, 00:13:48.481 "memory_domains": [ 00:13:48.481 { 00:13:48.481 "dma_device_id": "system", 00:13:48.481 "dma_device_type": 1 00:13:48.481 }, 00:13:48.481 { 00:13:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.481 "dma_device_type": 2 00:13:48.481 } 00:13:48.481 ], 00:13:48.481 "driver_specific": {} 00:13:48.481 } 00:13:48.481 ] 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.481 "name": "Existed_Raid", 00:13:48.481 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:48.481 "strip_size_kb": 0, 00:13:48.481 "state": "online", 00:13:48.481 "raid_level": 
"raid1", 00:13:48.481 "superblock": true, 00:13:48.481 "num_base_bdevs": 4, 00:13:48.481 "num_base_bdevs_discovered": 4, 00:13:48.481 "num_base_bdevs_operational": 4, 00:13:48.481 "base_bdevs_list": [ 00:13:48.481 { 00:13:48.481 "name": "NewBaseBdev", 00:13:48.481 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:48.481 "is_configured": true, 00:13:48.481 "data_offset": 2048, 00:13:48.481 "data_size": 63488 00:13:48.481 }, 00:13:48.481 { 00:13:48.481 "name": "BaseBdev2", 00:13:48.481 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:48.481 "is_configured": true, 00:13:48.481 "data_offset": 2048, 00:13:48.481 "data_size": 63488 00:13:48.481 }, 00:13:48.481 { 00:13:48.481 "name": "BaseBdev3", 00:13:48.481 "uuid": "93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:48.481 "is_configured": true, 00:13:48.481 "data_offset": 2048, 00:13:48.481 "data_size": 63488 00:13:48.481 }, 00:13:48.481 { 00:13:48.481 "name": "BaseBdev4", 00:13:48.481 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:48.481 "is_configured": true, 00:13:48.481 "data_offset": 2048, 00:13:48.481 "data_size": 63488 00:13:48.481 } 00:13:48.481 ] 00:13:48.481 }' 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.481 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.046 [2024-11-05 16:28:01.909855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.046 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.046 "name": "Existed_Raid", 00:13:49.046 "aliases": [ 00:13:49.046 "43063bba-29c2-4259-8404-07fdee752e57" 00:13:49.046 ], 00:13:49.046 "product_name": "Raid Volume", 00:13:49.046 "block_size": 512, 00:13:49.046 "num_blocks": 63488, 00:13:49.046 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:49.046 "assigned_rate_limits": { 00:13:49.046 "rw_ios_per_sec": 0, 00:13:49.046 "rw_mbytes_per_sec": 0, 00:13:49.046 "r_mbytes_per_sec": 0, 00:13:49.046 "w_mbytes_per_sec": 0 00:13:49.046 }, 00:13:49.046 "claimed": false, 00:13:49.046 "zoned": false, 00:13:49.046 "supported_io_types": { 00:13:49.046 "read": true, 00:13:49.046 "write": true, 00:13:49.046 "unmap": false, 00:13:49.046 "flush": false, 00:13:49.046 "reset": true, 00:13:49.046 "nvme_admin": false, 00:13:49.046 "nvme_io": false, 00:13:49.046 "nvme_io_md": false, 00:13:49.046 "write_zeroes": true, 00:13:49.046 "zcopy": false, 00:13:49.046 "get_zone_info": false, 00:13:49.046 "zone_management": false, 00:13:49.046 "zone_append": false, 00:13:49.046 "compare": false, 00:13:49.046 "compare_and_write": false, 00:13:49.046 "abort": false, 00:13:49.046 "seek_hole": false, 
00:13:49.046 "seek_data": false, 00:13:49.046 "copy": false, 00:13:49.046 "nvme_iov_md": false 00:13:49.046 }, 00:13:49.046 "memory_domains": [ 00:13:49.046 { 00:13:49.046 "dma_device_id": "system", 00:13:49.046 "dma_device_type": 1 00:13:49.046 }, 00:13:49.046 { 00:13:49.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.046 "dma_device_type": 2 00:13:49.046 }, 00:13:49.046 { 00:13:49.046 "dma_device_id": "system", 00:13:49.046 "dma_device_type": 1 00:13:49.046 }, 00:13:49.046 { 00:13:49.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.046 "dma_device_type": 2 00:13:49.046 }, 00:13:49.046 { 00:13:49.046 "dma_device_id": "system", 00:13:49.046 "dma_device_type": 1 00:13:49.046 }, 00:13:49.046 { 00:13:49.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.047 "dma_device_type": 2 00:13:49.047 }, 00:13:49.047 { 00:13:49.047 "dma_device_id": "system", 00:13:49.047 "dma_device_type": 1 00:13:49.047 }, 00:13:49.047 { 00:13:49.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.047 "dma_device_type": 2 00:13:49.047 } 00:13:49.047 ], 00:13:49.047 "driver_specific": { 00:13:49.047 "raid": { 00:13:49.047 "uuid": "43063bba-29c2-4259-8404-07fdee752e57", 00:13:49.047 "strip_size_kb": 0, 00:13:49.047 "state": "online", 00:13:49.047 "raid_level": "raid1", 00:13:49.047 "superblock": true, 00:13:49.047 "num_base_bdevs": 4, 00:13:49.047 "num_base_bdevs_discovered": 4, 00:13:49.047 "num_base_bdevs_operational": 4, 00:13:49.047 "base_bdevs_list": [ 00:13:49.047 { 00:13:49.047 "name": "NewBaseBdev", 00:13:49.047 "uuid": "a3bc82a4-7d4f-4b0d-8cd6-407709fc8c4a", 00:13:49.047 "is_configured": true, 00:13:49.047 "data_offset": 2048, 00:13:49.047 "data_size": 63488 00:13:49.047 }, 00:13:49.047 { 00:13:49.047 "name": "BaseBdev2", 00:13:49.047 "uuid": "4d2b0999-03f0-453a-be91-4115acd7e47e", 00:13:49.047 "is_configured": true, 00:13:49.047 "data_offset": 2048, 00:13:49.047 "data_size": 63488 00:13:49.047 }, 00:13:49.047 { 00:13:49.047 "name": "BaseBdev3", 00:13:49.047 "uuid": 
"93726d21-84af-4dc9-b9ab-2735d7bbd028", 00:13:49.047 "is_configured": true, 00:13:49.047 "data_offset": 2048, 00:13:49.047 "data_size": 63488 00:13:49.047 }, 00:13:49.047 { 00:13:49.047 "name": "BaseBdev4", 00:13:49.047 "uuid": "906df9eb-a0bd-404d-a846-4939b8d8d369", 00:13:49.047 "is_configured": true, 00:13:49.047 "data_offset": 2048, 00:13:49.047 "data_size": 63488 00:13:49.047 } 00:13:49.047 ] 00:13:49.047 } 00:13:49.047 } 00:13:49.047 }' 00:13:49.047 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.047 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.047 BaseBdev2 00:13:49.047 BaseBdev3 00:13:49.047 BaseBdev4' 00:13:49.047 16:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.047 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.305 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.305 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.305 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.305 
16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.306 [2024-11-05 16:28:02.216895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.306 [2024-11-05 16:28:02.216971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.306 [2024-11-05 16:28:02.217096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.306 [2024-11-05 16:28:02.217457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.306 [2024-11-05 16:28:02.217539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:49.306 16:28:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74184 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74184 ']' 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74184 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74184 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74184' 00:13:49.306 killing process with pid 74184 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74184 00:13:49.306 [2024-11-05 16:28:02.259807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.306 16:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74184 00:13:49.871 [2024-11-05 16:28:02.744732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.277 ************************************ 00:13:51.277 END TEST raid_state_function_test_sb 00:13:51.277 ************************************ 00:13:51.277 16:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.277 00:13:51.277 real 0m12.032s 00:13:51.277 user 0m19.085s 00:13:51.277 sys 0m1.694s 00:13:51.277 16:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.277 16:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.277 16:28:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:51.278 16:28:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:51.278 16:28:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.278 16:28:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.278 ************************************ 00:13:51.278 START TEST raid_superblock_test 00:13:51.278 ************************************ 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74861 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74861 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74861 ']' 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.278 16:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.278 [2024-11-05 16:28:04.221533] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:13:51.278 [2024-11-05 16:28:04.221769] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74861 ] 00:13:51.536 [2024-11-05 16:28:04.390972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.536 [2024-11-05 16:28:04.531930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.794 [2024-11-05 16:28:04.783489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.794 [2024-11-05 16:28:04.783675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.052 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.052 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:52.053 
16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.053 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 malloc1 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 [2024-11-05 16:28:05.191233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.311 [2024-11-05 16:28:05.191352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.311 [2024-11-05 16:28:05.191414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.311 [2024-11-05 16:28:05.191450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.311 [2024-11-05 16:28:05.193919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.311 [2024-11-05 16:28:05.194001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.311 pt1 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 malloc2 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 [2024-11-05 16:28:05.253574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.311 [2024-11-05 16:28:05.253680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.311 [2024-11-05 16:28:05.253735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.311 [2024-11-05 16:28:05.253778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.311 [2024-11-05 16:28:05.256228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.311 [2024-11-05 16:28:05.256303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.311 
pt2 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 malloc3 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.311 [2024-11-05 16:28:05.331323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:52.311 [2024-11-05 16:28:05.331431] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.311 [2024-11-05 16:28:05.331483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.311 [2024-11-05 16:28:05.331544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.311 [2024-11-05 16:28:05.333985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.311 [2024-11-05 16:28:05.334065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:52.311 pt3 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.311 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.312 malloc4 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.312 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.312 [2024-11-05 16:28:05.398729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:52.312 [2024-11-05 16:28:05.398837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.312 [2024-11-05 16:28:05.398879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.312 [2024-11-05 16:28:05.398919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.570 [2024-11-05 16:28:05.401330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.570 [2024-11-05 16:28:05.401410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:52.570 pt4 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.570 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.570 [2024-11-05 16:28:05.410739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:52.570 [2024-11-05 16:28:05.412860] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:52.570 [2024-11-05 16:28:05.412978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:52.570 [2024-11-05 16:28:05.413065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:52.570 [2024-11-05 16:28:05.413319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.570 [2024-11-05 16:28:05.413377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.570 [2024-11-05 16:28:05.413745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.570 [2024-11-05 16:28:05.413981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.570 [2024-11-05 16:28:05.414038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.571 [2024-11-05 16:28:05.414252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.571 
16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.571 "name": "raid_bdev1", 00:13:52.571 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:52.571 "strip_size_kb": 0, 00:13:52.571 "state": "online", 00:13:52.571 "raid_level": "raid1", 00:13:52.571 "superblock": true, 00:13:52.571 "num_base_bdevs": 4, 00:13:52.571 "num_base_bdevs_discovered": 4, 00:13:52.571 "num_base_bdevs_operational": 4, 00:13:52.571 "base_bdevs_list": [ 00:13:52.571 { 00:13:52.571 "name": "pt1", 00:13:52.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:52.571 "is_configured": true, 00:13:52.571 "data_offset": 2048, 00:13:52.571 "data_size": 63488 00:13:52.571 }, 00:13:52.571 { 00:13:52.571 "name": "pt2", 00:13:52.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.571 "is_configured": true, 00:13:52.571 "data_offset": 2048, 00:13:52.571 "data_size": 63488 00:13:52.571 }, 00:13:52.571 { 00:13:52.571 "name": "pt3", 00:13:52.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.571 "is_configured": true, 00:13:52.571 "data_offset": 2048, 00:13:52.571 "data_size": 63488 
00:13:52.571 }, 00:13:52.571 { 00:13:52.571 "name": "pt4", 00:13:52.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:52.571 "is_configured": true, 00:13:52.571 "data_offset": 2048, 00:13:52.571 "data_size": 63488 00:13:52.571 } 00:13:52.571 ] 00:13:52.571 }' 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.571 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.830 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.089 [2024-11-05 16:28:05.926308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.089 "name": "raid_bdev1", 00:13:53.089 "aliases": [ 00:13:53.089 "fb4b6d6a-3ac1-4450-9954-6931604a2ac2" 00:13:53.089 ], 
00:13:53.089 "product_name": "Raid Volume", 00:13:53.089 "block_size": 512, 00:13:53.089 "num_blocks": 63488, 00:13:53.089 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:53.089 "assigned_rate_limits": { 00:13:53.089 "rw_ios_per_sec": 0, 00:13:53.089 "rw_mbytes_per_sec": 0, 00:13:53.089 "r_mbytes_per_sec": 0, 00:13:53.089 "w_mbytes_per_sec": 0 00:13:53.089 }, 00:13:53.089 "claimed": false, 00:13:53.089 "zoned": false, 00:13:53.089 "supported_io_types": { 00:13:53.089 "read": true, 00:13:53.089 "write": true, 00:13:53.089 "unmap": false, 00:13:53.089 "flush": false, 00:13:53.089 "reset": true, 00:13:53.089 "nvme_admin": false, 00:13:53.089 "nvme_io": false, 00:13:53.089 "nvme_io_md": false, 00:13:53.089 "write_zeroes": true, 00:13:53.089 "zcopy": false, 00:13:53.089 "get_zone_info": false, 00:13:53.089 "zone_management": false, 00:13:53.089 "zone_append": false, 00:13:53.089 "compare": false, 00:13:53.089 "compare_and_write": false, 00:13:53.089 "abort": false, 00:13:53.089 "seek_hole": false, 00:13:53.089 "seek_data": false, 00:13:53.089 "copy": false, 00:13:53.089 "nvme_iov_md": false 00:13:53.089 }, 00:13:53.089 "memory_domains": [ 00:13:53.089 { 00:13:53.089 "dma_device_id": "system", 00:13:53.089 "dma_device_type": 1 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.089 "dma_device_type": 2 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "system", 00:13:53.089 "dma_device_type": 1 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.089 "dma_device_type": 2 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "system", 00:13:53.089 "dma_device_type": 1 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.089 "dma_device_type": 2 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": "system", 00:13:53.089 "dma_device_type": 1 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:53.089 "dma_device_type": 2 00:13:53.089 } 00:13:53.089 ], 00:13:53.089 "driver_specific": { 00:13:53.089 "raid": { 00:13:53.089 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:53.089 "strip_size_kb": 0, 00:13:53.089 "state": "online", 00:13:53.089 "raid_level": "raid1", 00:13:53.089 "superblock": true, 00:13:53.089 "num_base_bdevs": 4, 00:13:53.089 "num_base_bdevs_discovered": 4, 00:13:53.089 "num_base_bdevs_operational": 4, 00:13:53.089 "base_bdevs_list": [ 00:13:53.089 { 00:13:53.089 "name": "pt1", 00:13:53.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.089 "is_configured": true, 00:13:53.089 "data_offset": 2048, 00:13:53.089 "data_size": 63488 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "name": "pt2", 00:13:53.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.089 "is_configured": true, 00:13:53.089 "data_offset": 2048, 00:13:53.089 "data_size": 63488 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "name": "pt3", 00:13:53.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.089 "is_configured": true, 00:13:53.089 "data_offset": 2048, 00:13:53.089 "data_size": 63488 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "name": "pt4", 00:13:53.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:53.089 "is_configured": true, 00:13:53.089 "data_offset": 2048, 00:13:53.089 "data_size": 63488 00:13:53.089 } 00:13:53.089 ] 00:13:53.089 } 00:13:53.089 } 00:13:53.089 }' 00:13:53.089 16:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:53.089 pt2 00:13:53.089 pt3 00:13:53.089 pt4' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.089 16:28:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.348 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:53.349 [2024-11-05 16:28:06.249753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb4b6d6a-3ac1-4450-9954-6931604a2ac2 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb4b6d6a-3ac1-4450-9954-6931604a2ac2 ']' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 [2024-11-05 16:28:06.297292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.349 [2024-11-05 16:28:06.297363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.349 [2024-11-05 16:28:06.297464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.349 [2024-11-05 16:28:06.297577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.349 [2024-11-05 16:28:06.297595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.349 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.608 [2024-11-05 16:28:06.441087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:53.608 [2024-11-05 16:28:06.443269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:53.608 [2024-11-05 16:28:06.443334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:53.608 [2024-11-05 16:28:06.443374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:53.608 [2024-11-05 16:28:06.443435] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:53.608 [2024-11-05 16:28:06.443497] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:53.608 [2024-11-05 16:28:06.443533] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:53.608 [2024-11-05 16:28:06.443558] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:53.608 [2024-11-05 16:28:06.443574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.608 [2024-11-05 16:28:06.443587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:13:53.608 request: 00:13:53.608 { 00:13:53.608 "name": "raid_bdev1", 00:13:53.608 "raid_level": "raid1", 00:13:53.608 "base_bdevs": [ 00:13:53.608 "malloc1", 00:13:53.608 "malloc2", 00:13:53.608 "malloc3", 00:13:53.608 "malloc4" 00:13:53.608 ], 00:13:53.608 "superblock": false, 00:13:53.608 "method": "bdev_raid_create", 00:13:53.608 "req_id": 1 00:13:53.608 } 00:13:53.608 Got JSON-RPC error response 00:13:53.608 response: 00:13:53.608 { 00:13:53.608 "code": -17, 00:13:53.608 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:53.608 } 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:53.608 16:28:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.608 [2024-11-05 16:28:06.508953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:53.608 [2024-11-05 16:28:06.509079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.608 [2024-11-05 16:28:06.509121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:53.608 [2024-11-05 16:28:06.509159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.608 [2024-11-05 16:28:06.511724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.608 [2024-11-05 16:28:06.511815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:53.608 [2024-11-05 16:28:06.511947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:53.608 [2024-11-05 16:28:06.512047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:53.608 pt1 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.608 16:28:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.608 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.608 "name": "raid_bdev1", 00:13:53.608 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:53.608 "strip_size_kb": 0, 00:13:53.608 "state": "configuring", 00:13:53.608 "raid_level": "raid1", 00:13:53.608 "superblock": true, 00:13:53.608 "num_base_bdevs": 4, 00:13:53.609 "num_base_bdevs_discovered": 1, 00:13:53.609 "num_base_bdevs_operational": 4, 00:13:53.609 "base_bdevs_list": [ 00:13:53.609 { 00:13:53.609 "name": "pt1", 00:13:53.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:53.609 "is_configured": true, 00:13:53.609 "data_offset": 2048, 00:13:53.609 "data_size": 63488 00:13:53.609 }, 00:13:53.609 { 00:13:53.609 "name": null, 00:13:53.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:53.609 "is_configured": false, 00:13:53.609 "data_offset": 2048, 00:13:53.609 "data_size": 63488 00:13:53.609 }, 00:13:53.609 { 00:13:53.609 "name": null, 00:13:53.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:53.609 
"is_configured": false, 00:13:53.609 "data_offset": 2048, 00:13:53.609 "data_size": 63488 00:13:53.609 }, 00:13:53.609 { 00:13:53.609 "name": null, 00:13:53.609 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:53.609 "is_configured": false, 00:13:53.609 "data_offset": 2048, 00:13:53.609 "data_size": 63488 00:13:53.609 } 00:13:53.609 ] 00:13:53.609 }' 00:13:53.609 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.609 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.176 [2024-11-05 16:28:06.964704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.176 [2024-11-05 16:28:06.964832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.176 [2024-11-05 16:28:06.964860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.176 [2024-11-05 16:28:06.964873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.176 [2024-11-05 16:28:06.965364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.176 [2024-11-05 16:28:06.965387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.176 [2024-11-05 16:28:06.965481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.176 [2024-11-05 16:28:06.965531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:54.176 pt2 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.176 [2024-11-05 16:28:06.972690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.176 16:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.176 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.176 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.176 "name": "raid_bdev1", 00:13:54.176 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:54.176 "strip_size_kb": 0, 00:13:54.176 "state": "configuring", 00:13:54.176 "raid_level": "raid1", 00:13:54.176 "superblock": true, 00:13:54.176 "num_base_bdevs": 4, 00:13:54.176 "num_base_bdevs_discovered": 1, 00:13:54.176 "num_base_bdevs_operational": 4, 00:13:54.176 "base_bdevs_list": [ 00:13:54.176 { 00:13:54.176 "name": "pt1", 00:13:54.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.176 "is_configured": true, 00:13:54.176 "data_offset": 2048, 00:13:54.176 "data_size": 63488 00:13:54.176 }, 00:13:54.176 { 00:13:54.176 "name": null, 00:13:54.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.176 "is_configured": false, 00:13:54.176 "data_offset": 0, 00:13:54.176 "data_size": 63488 00:13:54.176 }, 00:13:54.176 { 00:13:54.176 "name": null, 00:13:54.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.176 "is_configured": false, 00:13:54.176 "data_offset": 2048, 00:13:54.176 "data_size": 63488 00:13:54.176 }, 00:13:54.176 { 00:13:54.176 "name": null, 00:13:54.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:54.176 "is_configured": false, 00:13:54.176 "data_offset": 2048, 00:13:54.176 "data_size": 63488 00:13:54.176 } 00:13:54.176 ] 00:13:54.176 }' 00:13:54.176 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.176 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.433 [2024-11-05 16:28:07.428716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:54.433 [2024-11-05 16:28:07.428868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.433 [2024-11-05 16:28:07.428905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:54.433 [2024-11-05 16:28:07.428918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.433 [2024-11-05 16:28:07.429438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.433 [2024-11-05 16:28:07.429457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:54.433 [2024-11-05 16:28:07.429575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:54.433 [2024-11-05 16:28:07.429602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:54.433 pt2 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:54.433 16:28:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.433 [2024-11-05 16:28:07.440673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:54.433 [2024-11-05 16:28:07.440773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.433 [2024-11-05 16:28:07.440816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:54.433 [2024-11-05 16:28:07.440851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.433 [2024-11-05 16:28:07.441328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.433 [2024-11-05 16:28:07.441389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:54.433 [2024-11-05 16:28:07.441510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:54.433 [2024-11-05 16:28:07.441576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:54.433 pt3 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.433 [2024-11-05 16:28:07.452640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:54.433 [2024-11-05 
16:28:07.452729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.433 [2024-11-05 16:28:07.452769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:54.433 [2024-11-05 16:28:07.452800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.433 [2024-11-05 16:28:07.453260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.433 [2024-11-05 16:28:07.453322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:54.433 [2024-11-05 16:28:07.453432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:54.433 [2024-11-05 16:28:07.453483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:54.433 [2024-11-05 16:28:07.453699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:54.433 [2024-11-05 16:28:07.453745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.433 [2024-11-05 16:28:07.454062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.433 [2024-11-05 16:28:07.454282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:54.433 [2024-11-05 16:28:07.454342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:54.433 [2024-11-05 16:28:07.454554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.433 pt4 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.433 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.433 "name": "raid_bdev1", 00:13:54.433 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:54.433 "strip_size_kb": 0, 00:13:54.433 "state": "online", 00:13:54.433 "raid_level": "raid1", 00:13:54.433 "superblock": true, 00:13:54.433 "num_base_bdevs": 4, 00:13:54.433 
"num_base_bdevs_discovered": 4, 00:13:54.433 "num_base_bdevs_operational": 4, 00:13:54.433 "base_bdevs_list": [ 00:13:54.433 { 00:13:54.433 "name": "pt1", 00:13:54.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:54.433 "is_configured": true, 00:13:54.433 "data_offset": 2048, 00:13:54.433 "data_size": 63488 00:13:54.433 }, 00:13:54.433 { 00:13:54.433 "name": "pt2", 00:13:54.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:54.434 "is_configured": true, 00:13:54.434 "data_offset": 2048, 00:13:54.434 "data_size": 63488 00:13:54.434 }, 00:13:54.434 { 00:13:54.434 "name": "pt3", 00:13:54.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:54.434 "is_configured": true, 00:13:54.434 "data_offset": 2048, 00:13:54.434 "data_size": 63488 00:13:54.434 }, 00:13:54.434 { 00:13:54.434 "name": "pt4", 00:13:54.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:54.434 "is_configured": true, 00:13:54.434 "data_offset": 2048, 00:13:54.434 "data_size": 63488 00:13:54.434 } 00:13:54.434 ] 00:13:54.434 }' 00:13:54.434 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.434 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.000 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.001 [2024-11-05 16:28:07.909064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.001 "name": "raid_bdev1", 00:13:55.001 "aliases": [ 00:13:55.001 "fb4b6d6a-3ac1-4450-9954-6931604a2ac2" 00:13:55.001 ], 00:13:55.001 "product_name": "Raid Volume", 00:13:55.001 "block_size": 512, 00:13:55.001 "num_blocks": 63488, 00:13:55.001 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:55.001 "assigned_rate_limits": { 00:13:55.001 "rw_ios_per_sec": 0, 00:13:55.001 "rw_mbytes_per_sec": 0, 00:13:55.001 "r_mbytes_per_sec": 0, 00:13:55.001 "w_mbytes_per_sec": 0 00:13:55.001 }, 00:13:55.001 "claimed": false, 00:13:55.001 "zoned": false, 00:13:55.001 "supported_io_types": { 00:13:55.001 "read": true, 00:13:55.001 "write": true, 00:13:55.001 "unmap": false, 00:13:55.001 "flush": false, 00:13:55.001 "reset": true, 00:13:55.001 "nvme_admin": false, 00:13:55.001 "nvme_io": false, 00:13:55.001 "nvme_io_md": false, 00:13:55.001 "write_zeroes": true, 00:13:55.001 "zcopy": false, 00:13:55.001 "get_zone_info": false, 00:13:55.001 "zone_management": false, 00:13:55.001 "zone_append": false, 00:13:55.001 "compare": false, 00:13:55.001 "compare_and_write": false, 00:13:55.001 "abort": false, 00:13:55.001 "seek_hole": false, 00:13:55.001 "seek_data": false, 00:13:55.001 "copy": false, 00:13:55.001 "nvme_iov_md": false 00:13:55.001 }, 00:13:55.001 "memory_domains": [ 00:13:55.001 { 00:13:55.001 "dma_device_id": "system", 00:13:55.001 
"dma_device_type": 1 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.001 "dma_device_type": 2 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "system", 00:13:55.001 "dma_device_type": 1 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.001 "dma_device_type": 2 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "system", 00:13:55.001 "dma_device_type": 1 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.001 "dma_device_type": 2 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "system", 00:13:55.001 "dma_device_type": 1 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.001 "dma_device_type": 2 00:13:55.001 } 00:13:55.001 ], 00:13:55.001 "driver_specific": { 00:13:55.001 "raid": { 00:13:55.001 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:55.001 "strip_size_kb": 0, 00:13:55.001 "state": "online", 00:13:55.001 "raid_level": "raid1", 00:13:55.001 "superblock": true, 00:13:55.001 "num_base_bdevs": 4, 00:13:55.001 "num_base_bdevs_discovered": 4, 00:13:55.001 "num_base_bdevs_operational": 4, 00:13:55.001 "base_bdevs_list": [ 00:13:55.001 { 00:13:55.001 "name": "pt1", 00:13:55.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:55.001 "is_configured": true, 00:13:55.001 "data_offset": 2048, 00:13:55.001 "data_size": 63488 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "name": "pt2", 00:13:55.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.001 "is_configured": true, 00:13:55.001 "data_offset": 2048, 00:13:55.001 "data_size": 63488 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "name": "pt3", 00:13:55.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.001 "is_configured": true, 00:13:55.001 "data_offset": 2048, 00:13:55.001 "data_size": 63488 00:13:55.001 }, 00:13:55.001 { 00:13:55.001 "name": "pt4", 00:13:55.001 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:55.001 "is_configured": true, 00:13:55.001 "data_offset": 2048, 00:13:55.001 "data_size": 63488 00:13:55.001 } 00:13:55.001 ] 00:13:55.001 } 00:13:55.001 } 00:13:55.001 }' 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:55.001 pt2 00:13:55.001 pt3 00:13:55.001 pt4' 00:13:55.001 16:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.001 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.259 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.260 [2024-11-05 16:28:08.277027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fb4b6d6a-3ac1-4450-9954-6931604a2ac2 '!=' fb4b6d6a-3ac1-4450-9954-6931604a2ac2 ']' 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.260 [2024-11-05 16:28:08.324758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:55.260 16:28:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.260 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.519 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.519 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.519 "name": "raid_bdev1", 00:13:55.519 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:55.519 "strip_size_kb": 0, 00:13:55.519 "state": "online", 
00:13:55.519 "raid_level": "raid1", 00:13:55.519 "superblock": true, 00:13:55.519 "num_base_bdevs": 4, 00:13:55.519 "num_base_bdevs_discovered": 3, 00:13:55.519 "num_base_bdevs_operational": 3, 00:13:55.519 "base_bdevs_list": [ 00:13:55.519 { 00:13:55.519 "name": null, 00:13:55.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.519 "is_configured": false, 00:13:55.519 "data_offset": 0, 00:13:55.519 "data_size": 63488 00:13:55.519 }, 00:13:55.519 { 00:13:55.519 "name": "pt2", 00:13:55.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.519 "is_configured": true, 00:13:55.519 "data_offset": 2048, 00:13:55.519 "data_size": 63488 00:13:55.519 }, 00:13:55.519 { 00:13:55.519 "name": "pt3", 00:13:55.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.519 "is_configured": true, 00:13:55.519 "data_offset": 2048, 00:13:55.519 "data_size": 63488 00:13:55.519 }, 00:13:55.519 { 00:13:55.519 "name": "pt4", 00:13:55.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:55.519 "is_configured": true, 00:13:55.519 "data_offset": 2048, 00:13:55.519 "data_size": 63488 00:13:55.519 } 00:13:55.519 ] 00:13:55.519 }' 00:13:55.519 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.519 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 [2024-11-05 16:28:08.756669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.779 [2024-11-05 16:28:08.756707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.779 [2024-11-05 16:28:08.756806] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:55.779 [2024-11-05 16:28:08.756896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.779 [2024-11-05 16:28:08.756909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.779 
16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.779 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.780 [2024-11-05 16:28:08.844659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:55.780 [2024-11-05 16:28:08.844719] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.780 [2024-11-05 16:28:08.844741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:55.780 [2024-11-05 16:28:08.844751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.780 [2024-11-05 16:28:08.847303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.780 [2024-11-05 16:28:08.847349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:55.780 [2024-11-05 16:28:08.847449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:55.780 [2024-11-05 16:28:08.847509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:55.780 pt2 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.780 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.039 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.039 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.039 "name": "raid_bdev1", 00:13:56.039 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:56.039 "strip_size_kb": 0, 00:13:56.039 "state": "configuring", 00:13:56.039 "raid_level": "raid1", 00:13:56.039 "superblock": true, 00:13:56.039 "num_base_bdevs": 4, 00:13:56.039 "num_base_bdevs_discovered": 1, 00:13:56.039 "num_base_bdevs_operational": 3, 00:13:56.039 "base_bdevs_list": [ 00:13:56.039 { 00:13:56.039 "name": null, 00:13:56.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.039 "is_configured": false, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 00:13:56.039 }, 00:13:56.039 { 00:13:56.039 "name": "pt2", 00:13:56.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.039 "is_configured": true, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 00:13:56.039 }, 00:13:56.039 { 00:13:56.039 "name": null, 00:13:56.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.039 "is_configured": false, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 00:13:56.039 }, 00:13:56.039 { 00:13:56.039 "name": null, 00:13:56.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.039 "is_configured": false, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 00:13:56.039 } 00:13:56.039 ] 00:13:56.039 }' 
00:13:56.039 16:28:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.040 16:28:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.297 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:56.297 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:56.297 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:56.297 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.297 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.297 [2024-11-05 16:28:09.344695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:56.297 [2024-11-05 16:28:09.344814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.297 [2024-11-05 16:28:09.344870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:56.298 [2024-11-05 16:28:09.344905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.298 [2024-11-05 16:28:09.345442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.298 [2024-11-05 16:28:09.345512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:56.298 [2024-11-05 16:28:09.345661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:56.298 [2024-11-05 16:28:09.345718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.298 pt3 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.298 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.555 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.555 "name": "raid_bdev1", 00:13:56.555 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:56.555 "strip_size_kb": 0, 00:13:56.555 "state": "configuring", 00:13:56.555 "raid_level": "raid1", 00:13:56.555 "superblock": true, 00:13:56.555 "num_base_bdevs": 4, 00:13:56.555 "num_base_bdevs_discovered": 2, 00:13:56.555 "num_base_bdevs_operational": 3, 00:13:56.555 
"base_bdevs_list": [ 00:13:56.555 { 00:13:56.555 "name": null, 00:13:56.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.555 "is_configured": false, 00:13:56.555 "data_offset": 2048, 00:13:56.555 "data_size": 63488 00:13:56.555 }, 00:13:56.555 { 00:13:56.555 "name": "pt2", 00:13:56.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.555 "is_configured": true, 00:13:56.555 "data_offset": 2048, 00:13:56.555 "data_size": 63488 00:13:56.555 }, 00:13:56.555 { 00:13:56.555 "name": "pt3", 00:13:56.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.555 "is_configured": true, 00:13:56.555 "data_offset": 2048, 00:13:56.555 "data_size": 63488 00:13:56.555 }, 00:13:56.555 { 00:13:56.555 "name": null, 00:13:56.555 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.555 "is_configured": false, 00:13:56.555 "data_offset": 2048, 00:13:56.555 "data_size": 63488 00:13:56.555 } 00:13:56.555 ] 00:13:56.556 }' 00:13:56.556 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.556 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.813 [2024-11-05 16:28:09.800715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:56.813 [2024-11-05 16:28:09.800792] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.813 [2024-11-05 16:28:09.800818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:56.813 [2024-11-05 16:28:09.800828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.813 [2024-11-05 16:28:09.801326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.813 [2024-11-05 16:28:09.801345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:56.813 [2024-11-05 16:28:09.801441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:56.813 [2024-11-05 16:28:09.801472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:56.813 [2024-11-05 16:28:09.801680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:56.813 [2024-11-05 16:28:09.801691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.813 [2024-11-05 16:28:09.801972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:56.813 [2024-11-05 16:28:09.802159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:56.813 [2024-11-05 16:28:09.802173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:56.813 [2024-11-05 16:28:09.802344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.813 pt4 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.813 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.813 "name": "raid_bdev1", 00:13:56.813 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:56.813 "strip_size_kb": 0, 00:13:56.813 "state": "online", 00:13:56.813 "raid_level": "raid1", 00:13:56.813 "superblock": true, 00:13:56.813 "num_base_bdevs": 4, 00:13:56.813 "num_base_bdevs_discovered": 3, 00:13:56.813 "num_base_bdevs_operational": 3, 00:13:56.813 "base_bdevs_list": [ 00:13:56.813 { 00:13:56.813 "name": null, 00:13:56.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.813 "is_configured": false, 00:13:56.813 
"data_offset": 2048, 00:13:56.813 "data_size": 63488 00:13:56.813 }, 00:13:56.813 { 00:13:56.813 "name": "pt2", 00:13:56.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.813 "is_configured": true, 00:13:56.813 "data_offset": 2048, 00:13:56.813 "data_size": 63488 00:13:56.813 }, 00:13:56.813 { 00:13:56.813 "name": "pt3", 00:13:56.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.813 "is_configured": true, 00:13:56.813 "data_offset": 2048, 00:13:56.813 "data_size": 63488 00:13:56.813 }, 00:13:56.813 { 00:13:56.813 "name": "pt4", 00:13:56.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.813 "is_configured": true, 00:13:56.813 "data_offset": 2048, 00:13:56.813 "data_size": 63488 00:13:56.813 } 00:13:56.814 ] 00:13:56.814 }' 00:13:56.814 16:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.814 16:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.379 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.379 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.379 [2024-11-05 16:28:10.264664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.379 [2024-11-05 16:28:10.264752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.379 [2024-11-05 16:28:10.264874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.380 [2024-11-05 16:28:10.264992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.380 [2024-11-05 16:28:10.265050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:57.380 16:28:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.380 [2024-11-05 16:28:10.320657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.380 [2024-11-05 16:28:10.320778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:57.380 [2024-11-05 16:28:10.320833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:57.380 [2024-11-05 16:28:10.320871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.380 [2024-11-05 16:28:10.323438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.380 [2024-11-05 16:28:10.323547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.380 [2024-11-05 16:28:10.323689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:57.380 [2024-11-05 16:28:10.323779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.380 [2024-11-05 16:28:10.323972] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:57.380 [2024-11-05 16:28:10.324035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.380 [2024-11-05 16:28:10.324071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:57.380 [2024-11-05 16:28:10.324201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.380 [2024-11-05 16:28:10.324372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:57.380 pt1 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.380 "name": "raid_bdev1", 00:13:57.380 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:57.380 "strip_size_kb": 0, 00:13:57.380 "state": "configuring", 00:13:57.380 "raid_level": "raid1", 00:13:57.380 "superblock": true, 00:13:57.380 "num_base_bdevs": 4, 00:13:57.380 "num_base_bdevs_discovered": 2, 00:13:57.380 "num_base_bdevs_operational": 3, 00:13:57.380 "base_bdevs_list": [ 00:13:57.380 { 00:13:57.380 "name": null, 00:13:57.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.380 "is_configured": false, 00:13:57.380 "data_offset": 2048, 00:13:57.380 
"data_size": 63488 00:13:57.380 }, 00:13:57.380 { 00:13:57.380 "name": "pt2", 00:13:57.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.380 "is_configured": true, 00:13:57.380 "data_offset": 2048, 00:13:57.380 "data_size": 63488 00:13:57.380 }, 00:13:57.380 { 00:13:57.380 "name": "pt3", 00:13:57.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.380 "is_configured": true, 00:13:57.380 "data_offset": 2048, 00:13:57.380 "data_size": 63488 00:13:57.380 }, 00:13:57.380 { 00:13:57.380 "name": null, 00:13:57.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:57.380 "is_configured": false, 00:13:57.380 "data_offset": 2048, 00:13:57.380 "data_size": 63488 00:13:57.380 } 00:13:57.380 ] 00:13:57.380 }' 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.380 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:57.947 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.948 [2024-11-05 
16:28:10.804683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:57.948 [2024-11-05 16:28:10.804797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.948 [2024-11-05 16:28:10.804853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:57.948 [2024-11-05 16:28:10.804888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.948 [2024-11-05 16:28:10.805409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.948 [2024-11-05 16:28:10.805441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:57.948 [2024-11-05 16:28:10.805557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:57.948 [2024-11-05 16:28:10.805594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:57.948 [2024-11-05 16:28:10.805777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:57.948 [2024-11-05 16:28:10.805787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:57.948 [2024-11-05 16:28:10.806078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:57.948 [2024-11-05 16:28:10.806262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:57.948 [2024-11-05 16:28:10.806275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:57.948 [2024-11-05 16:28:10.806451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.948 pt4 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.948 16:28:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.948 "name": "raid_bdev1", 00:13:57.948 "uuid": "fb4b6d6a-3ac1-4450-9954-6931604a2ac2", 00:13:57.948 "strip_size_kb": 0, 00:13:57.948 "state": "online", 00:13:57.948 "raid_level": "raid1", 00:13:57.948 "superblock": true, 00:13:57.948 "num_base_bdevs": 4, 00:13:57.948 "num_base_bdevs_discovered": 3, 00:13:57.948 "num_base_bdevs_operational": 3, 00:13:57.948 "base_bdevs_list": [ 00:13:57.948 { 
00:13:57.948 "name": null, 00:13:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.948 "is_configured": false, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 }, 00:13:57.948 { 00:13:57.948 "name": "pt2", 00:13:57.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.948 "is_configured": true, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 }, 00:13:57.948 { 00:13:57.948 "name": "pt3", 00:13:57.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.948 "is_configured": true, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 }, 00:13:57.948 { 00:13:57.948 "name": "pt4", 00:13:57.948 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:57.948 "is_configured": true, 00:13:57.948 "data_offset": 2048, 00:13:57.948 "data_size": 63488 00:13:57.948 } 00:13:57.948 ] 00:13:57.948 }' 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.948 16:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:58.209 
16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.209 [2024-11-05 16:28:11.277033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.209 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fb4b6d6a-3ac1-4450-9954-6931604a2ac2 '!=' fb4b6d6a-3ac1-4450-9954-6931604a2ac2 ']' 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74861 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74861 ']' 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74861 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74861 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74861' 00:13:58.466 killing process with pid 74861 00:13:58.466 16:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74861 00:13:58.466 [2024-11-05 16:28:11.339302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.466 [2024-11-05 16:28:11.339475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.466 16:28:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74861 00:13:58.466 [2024-11-05 16:28:11.339619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.466 [2024-11-05 16:28:11.339639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:59.031 [2024-11-05 16:28:11.826850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.405 16:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:00.405 00:14:00.405 real 0m9.025s 00:14:00.405 user 0m14.181s 00:14:00.405 sys 0m1.343s 00:14:00.405 16:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:00.405 ************************************ 00:14:00.405 END TEST raid_superblock_test 00:14:00.405 ************************************ 00:14:00.405 16:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.405 16:28:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:00.405 16:28:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:00.405 16:28:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:00.405 16:28:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.405 ************************************ 00:14:00.405 START TEST raid_read_error_test 00:14:00.405 ************************************ 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:00.405 
16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:00.405 16:28:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1piYwulcdV 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75354 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75354 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75354 ']' 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.405 16:28:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.405 [2024-11-05 16:28:13.321578] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:14:00.405 [2024-11-05 16:28:13.321875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ] 00:14:00.663 [2024-11-05 16:28:13.586231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.663 [2024-11-05 16:28:13.718811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.922 [2024-11-05 16:28:13.960675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.922 [2024-11-05 16:28:13.960864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 BaseBdev1_malloc 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 true 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.488 [2024-11-05 16:28:14.338465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:01.488 [2024-11-05 16:28:14.338547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.488 [2024-11-05 16:28:14.338572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:01.488 [2024-11-05 16:28:14.338586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.488 [2024-11-05 16:28:14.341041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.488 [2024-11-05 16:28:14.341158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.488 BaseBdev1 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.488 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 BaseBdev2_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 true 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 [2024-11-05 16:28:14.402215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:01.489 [2024-11-05 16:28:14.402282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.489 [2024-11-05 16:28:14.402302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:01.489 [2024-11-05 16:28:14.402315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.489 [2024-11-05 16:28:14.404757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.489 [2024-11-05 16:28:14.404802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:01.489 BaseBdev2 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 BaseBdev3_malloc 00:14:01.489 16:28:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 true 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 [2024-11-05 16:28:14.485673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:01.489 [2024-11-05 16:28:14.485736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.489 [2024-11-05 16:28:14.485758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:01.489 [2024-11-05 16:28:14.485771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.489 [2024-11-05 16:28:14.488228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.489 [2024-11-05 16:28:14.488274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:01.489 BaseBdev3 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 BaseBdev4_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 true 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 [2024-11-05 16:28:14.561560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:01.489 [2024-11-05 16:28:14.561621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.489 [2024-11-05 16:28:14.561644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:01.489 [2024-11-05 16:28:14.561657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.489 [2024-11-05 16:28:14.564087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.489 [2024-11-05 16:28:14.564134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:01.489 BaseBdev4 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.489 [2024-11-05 16:28:14.569614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.489 [2024-11-05 16:28:14.571733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.489 [2024-11-05 16:28:14.571823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.489 [2024-11-05 16:28:14.571899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:01.489 [2024-11-05 16:28:14.572176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:01.489 [2024-11-05 16:28:14.572192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.489 [2024-11-05 16:28:14.572471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:01.489 [2024-11-05 16:28:14.572701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:01.489 [2024-11-05 16:28:14.572713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:01.489 [2024-11-05 16:28:14.572906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:01.489 16:28:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.489 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.747 "name": "raid_bdev1", 00:14:01.747 "uuid": "7bfe6890-6913-4ad0-b891-362c2157e8f3", 00:14:01.747 "strip_size_kb": 0, 00:14:01.747 "state": "online", 00:14:01.747 "raid_level": "raid1", 00:14:01.747 "superblock": true, 00:14:01.747 "num_base_bdevs": 4, 00:14:01.747 "num_base_bdevs_discovered": 4, 00:14:01.747 "num_base_bdevs_operational": 4, 00:14:01.747 "base_bdevs_list": [ 00:14:01.747 { 
00:14:01.747 "name": "BaseBdev1", 00:14:01.747 "uuid": "69312c99-629c-5e16-8e82-ccd7c5b64f88", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 }, 00:14:01.747 { 00:14:01.747 "name": "BaseBdev2", 00:14:01.747 "uuid": "238f3827-89f4-5e35-8b1e-f855abc001a0", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 }, 00:14:01.747 { 00:14:01.747 "name": "BaseBdev3", 00:14:01.747 "uuid": "c3f730bd-3a66-52d2-8e31-a1c85e50a2ec", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 }, 00:14:01.747 { 00:14:01.747 "name": "BaseBdev4", 00:14:01.747 "uuid": "f72a7d17-04d8-5297-b71a-8c5b41d235b1", 00:14:01.747 "is_configured": true, 00:14:01.747 "data_offset": 2048, 00:14:01.747 "data_size": 63488 00:14:01.747 } 00:14:01.747 ] 00:14:01.747 }' 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.747 16:28:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.006 16:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:02.006 16:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:02.264 [2024-11-05 16:28:15.154471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.198 16:28:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.198 16:28:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.198 "name": "raid_bdev1", 00:14:03.198 "uuid": "7bfe6890-6913-4ad0-b891-362c2157e8f3", 00:14:03.198 "strip_size_kb": 0, 00:14:03.198 "state": "online", 00:14:03.198 "raid_level": "raid1", 00:14:03.198 "superblock": true, 00:14:03.198 "num_base_bdevs": 4, 00:14:03.198 "num_base_bdevs_discovered": 4, 00:14:03.198 "num_base_bdevs_operational": 4, 00:14:03.198 "base_bdevs_list": [ 00:14:03.198 { 00:14:03.198 "name": "BaseBdev1", 00:14:03.198 "uuid": "69312c99-629c-5e16-8e82-ccd7c5b64f88", 00:14:03.198 "is_configured": true, 00:14:03.198 "data_offset": 2048, 00:14:03.198 "data_size": 63488 00:14:03.198 }, 00:14:03.198 { 00:14:03.198 "name": "BaseBdev2", 00:14:03.198 "uuid": "238f3827-89f4-5e35-8b1e-f855abc001a0", 00:14:03.198 "is_configured": true, 00:14:03.198 "data_offset": 2048, 00:14:03.198 "data_size": 63488 00:14:03.198 }, 00:14:03.198 { 00:14:03.198 "name": "BaseBdev3", 00:14:03.198 "uuid": "c3f730bd-3a66-52d2-8e31-a1c85e50a2ec", 00:14:03.198 "is_configured": true, 00:14:03.198 "data_offset": 2048, 00:14:03.198 "data_size": 63488 00:14:03.198 }, 00:14:03.198 { 00:14:03.198 "name": "BaseBdev4", 00:14:03.198 "uuid": "f72a7d17-04d8-5297-b71a-8c5b41d235b1", 00:14:03.198 "is_configured": true, 00:14:03.198 "data_offset": 2048, 00:14:03.198 "data_size": 63488 00:14:03.198 } 00:14:03.198 ] 00:14:03.198 }' 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.198 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.456 [2024-11-05 16:28:16.502311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.456 [2024-11-05 16:28:16.502422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.456 [2024-11-05 16:28:16.505702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.456 [2024-11-05 16:28:16.505771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.456 [2024-11-05 16:28:16.505911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.456 [2024-11-05 16:28:16.505925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:03.456 { 00:14:03.456 "results": [ 00:14:03.456 { 00:14:03.456 "job": "raid_bdev1", 00:14:03.456 "core_mask": "0x1", 00:14:03.456 "workload": "randrw", 00:14:03.456 "percentage": 50, 00:14:03.456 "status": "finished", 00:14:03.456 "queue_depth": 1, 00:14:03.456 "io_size": 131072, 00:14:03.456 "runtime": 1.348236, 00:14:03.456 "iops": 9280.2743733293, 00:14:03.456 "mibps": 1160.0342966661624, 00:14:03.456 "io_failed": 0, 00:14:03.456 "io_timeout": 0, 00:14:03.456 "avg_latency_us": 104.46863154603022, 00:14:03.456 "min_latency_us": 30.406986899563318, 00:14:03.456 "max_latency_us": 1888.810480349345 00:14:03.456 } 00:14:03.456 ], 00:14:03.456 "core_count": 1 00:14:03.456 } 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75354 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75354 ']' 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75354 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75354 00:14:03.456 killing process with pid 75354 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75354' 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75354 00:14:03.456 [2024-11-05 16:28:16.533484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.456 16:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75354 00:14:04.021 [2024-11-05 16:28:16.935243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1piYwulcdV 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:05.392 00:14:05.392 real 0m5.157s 00:14:05.392 user 0m6.168s 00:14:05.392 sys 0m0.545s 
00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:05.392 ************************************ 00:14:05.392 END TEST raid_read_error_test 00:14:05.392 ************************************ 00:14:05.392 16:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.392 16:28:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:05.392 16:28:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:05.392 16:28:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:05.392 16:28:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.392 ************************************ 00:14:05.392 START TEST raid_write_error_test 00:14:05.392 ************************************ 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TrVzvhOzFZ 00:14:05.392 16:28:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75505 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75505 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75505 ']' 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.392 16:28:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.649 [2024-11-05 16:28:18.513176] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:14:05.649 [2024-11-05 16:28:18.513400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75505 ] 00:14:05.649 [2024-11-05 16:28:18.694982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.943 [2024-11-05 16:28:18.842600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.200 [2024-11-05 16:28:19.096589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.200 [2024-11-05 16:28:19.096693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 BaseBdev1_malloc 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 true 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 [2024-11-05 16:28:19.470991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:06.460 [2024-11-05 16:28:19.471124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.460 [2024-11-05 16:28:19.471201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:06.460 [2024-11-05 16:28:19.471259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.460 [2024-11-05 16:28:19.474051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.460 [2024-11-05 16:28:19.474145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.460 BaseBdev1 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 BaseBdev2_malloc 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:06.460 16:28:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.461 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.461 true 00:14:06.461 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.461 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:06.461 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.461 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 [2024-11-05 16:28:19.549365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:06.721 [2024-11-05 16:28:19.549424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.721 [2024-11-05 16:28:19.549444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:06.721 [2024-11-05 16:28:19.549457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.721 [2024-11-05 16:28:19.551881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.721 [2024-11-05 16:28:19.551920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.721 BaseBdev2 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:06.721 BaseBdev3_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 true 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 [2024-11-05 16:28:19.626116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:06.721 [2024-11-05 16:28:19.626176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.721 [2024-11-05 16:28:19.626198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:06.721 [2024-11-05 16:28:19.626211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.721 [2024-11-05 16:28:19.628698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.721 [2024-11-05 16:28:19.628741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.721 BaseBdev3 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 BaseBdev4_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 true 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.721 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.721 [2024-11-05 16:28:19.688440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:06.721 [2024-11-05 16:28:19.688499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.721 [2024-11-05 16:28:19.688542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:06.721 [2024-11-05 16:28:19.688556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.721 [2024-11-05 16:28:19.690994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.722 [2024-11-05 16:28:19.691041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:06.722 BaseBdev4 
00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.722 [2024-11-05 16:28:19.696506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.722 [2024-11-05 16:28:19.698663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.722 [2024-11-05 16:28:19.698777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.722 [2024-11-05 16:28:19.698857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:06.722 [2024-11-05 16:28:19.699127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:06.722 [2024-11-05 16:28:19.699152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.722 [2024-11-05 16:28:19.699456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:06.722 [2024-11-05 16:28:19.699677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:06.722 [2024-11-05 16:28:19.699697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:06.722 [2024-11-05 16:28:19.699896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.722 "name": "raid_bdev1", 00:14:06.722 "uuid": "2b4dd6a4-b444-478f-8cd7-2243fbceb7ce", 00:14:06.722 "strip_size_kb": 0, 00:14:06.722 "state": "online", 00:14:06.722 "raid_level": "raid1", 00:14:06.722 "superblock": true, 00:14:06.722 "num_base_bdevs": 4, 00:14:06.722 "num_base_bdevs_discovered": 4, 00:14:06.722 
"num_base_bdevs_operational": 4, 00:14:06.722 "base_bdevs_list": [ 00:14:06.722 { 00:14:06.722 "name": "BaseBdev1", 00:14:06.722 "uuid": "a9c2b3c9-3c7e-516f-a09f-0452dac47eb7", 00:14:06.722 "is_configured": true, 00:14:06.722 "data_offset": 2048, 00:14:06.722 "data_size": 63488 00:14:06.722 }, 00:14:06.722 { 00:14:06.722 "name": "BaseBdev2", 00:14:06.722 "uuid": "4b3321d3-29b6-5541-8998-25bdf61bf2db", 00:14:06.722 "is_configured": true, 00:14:06.722 "data_offset": 2048, 00:14:06.722 "data_size": 63488 00:14:06.722 }, 00:14:06.722 { 00:14:06.722 "name": "BaseBdev3", 00:14:06.722 "uuid": "15231edb-3b5f-5593-9dd9-bc0d733e8aa7", 00:14:06.722 "is_configured": true, 00:14:06.722 "data_offset": 2048, 00:14:06.722 "data_size": 63488 00:14:06.722 }, 00:14:06.722 { 00:14:06.722 "name": "BaseBdev4", 00:14:06.722 "uuid": "8379c272-6728-571b-86d6-23a03439f6ec", 00:14:06.722 "is_configured": true, 00:14:06.722 "data_offset": 2048, 00:14:06.722 "data_size": 63488 00:14:06.722 } 00:14:06.722 ] 00:14:06.722 }' 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.722 16:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.289 16:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:07.289 16:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:07.289 [2024-11-05 16:28:20.249147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.225 [2024-11-05 16:28:21.120136] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:08.225 [2024-11-05 16:28:21.120197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.225 [2024-11-05 16:28:21.120449] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.225 "name": "raid_bdev1", 00:14:08.225 "uuid": "2b4dd6a4-b444-478f-8cd7-2243fbceb7ce", 00:14:08.225 "strip_size_kb": 0, 00:14:08.225 "state": "online", 00:14:08.225 "raid_level": "raid1", 00:14:08.225 "superblock": true, 00:14:08.225 "num_base_bdevs": 4, 00:14:08.225 "num_base_bdevs_discovered": 3, 00:14:08.225 "num_base_bdevs_operational": 3, 00:14:08.225 "base_bdevs_list": [ 00:14:08.225 { 00:14:08.225 "name": null, 00:14:08.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.225 "is_configured": false, 00:14:08.225 "data_offset": 0, 00:14:08.225 "data_size": 63488 00:14:08.225 }, 00:14:08.225 { 00:14:08.225 "name": "BaseBdev2", 00:14:08.225 "uuid": "4b3321d3-29b6-5541-8998-25bdf61bf2db", 00:14:08.225 "is_configured": true, 00:14:08.225 "data_offset": 2048, 00:14:08.225 "data_size": 63488 00:14:08.225 }, 00:14:08.225 { 00:14:08.225 "name": "BaseBdev3", 00:14:08.225 "uuid": "15231edb-3b5f-5593-9dd9-bc0d733e8aa7", 00:14:08.225 "is_configured": true, 00:14:08.225 "data_offset": 2048, 00:14:08.225 "data_size": 63488 00:14:08.225 }, 00:14:08.225 { 00:14:08.225 "name": "BaseBdev4", 00:14:08.225 "uuid": "8379c272-6728-571b-86d6-23a03439f6ec", 00:14:08.225 "is_configured": true, 00:14:08.225 "data_offset": 2048, 00:14:08.225 "data_size": 63488 00:14:08.225 } 00:14:08.225 ] 
00:14:08.225 }' 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.225 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.793 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.793 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.793 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.793 [2024-11-05 16:28:21.617181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.793 [2024-11-05 16:28:21.617222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.793 [2024-11-05 16:28:21.620458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.793 [2024-11-05 16:28:21.620534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.793 [2024-11-05 16:28:21.620666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.793 [2024-11-05 16:28:21.620682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:08.793 { 00:14:08.793 "results": [ 00:14:08.793 { 00:14:08.793 "job": "raid_bdev1", 00:14:08.793 "core_mask": "0x1", 00:14:08.793 "workload": "randrw", 00:14:08.794 "percentage": 50, 00:14:08.794 "status": "finished", 00:14:08.794 "queue_depth": 1, 00:14:08.794 "io_size": 131072, 00:14:08.794 "runtime": 1.368626, 00:14:08.794 "iops": 10154.709906139442, 00:14:08.794 "mibps": 1269.3387382674302, 00:14:08.794 "io_failed": 0, 00:14:08.794 "io_timeout": 0, 00:14:08.794 "avg_latency_us": 95.1447166222277, 00:14:08.794 "min_latency_us": 30.183406113537117, 00:14:08.794 "max_latency_us": 1717.1004366812226 00:14:08.794 } 00:14:08.794 ], 00:14:08.794 "core_count": 1 
00:14:08.794 } 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75505 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75505 ']' 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75505 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75505 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.794 killing process with pid 75505 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75505' 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75505 00:14:08.794 [2024-11-05 16:28:21.651839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.794 16:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75505 00:14:09.052 [2024-11-05 16:28:22.047524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TrVzvhOzFZ 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:10.429 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:10.430 16:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:10.430 00:14:10.430 real 0m5.044s 00:14:10.430 user 0m5.943s 00:14:10.430 sys 0m0.589s 00:14:10.430 16:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.430 16:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 ************************************ 00:14:10.430 END TEST raid_write_error_test 00:14:10.430 ************************************ 00:14:10.430 16:28:23 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:10.430 16:28:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:10.430 16:28:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:10.430 16:28:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:10.430 16:28:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.430 16:28:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 ************************************ 00:14:10.430 START TEST raid_rebuild_test 00:14:10.430 ************************************ 00:14:10.430 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:14:10.430 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.430 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:10.430 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:10.430 
16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:10.430 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75649 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75649 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75649 ']' 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.689 16:28:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.689 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.689 Zero copy mechanism will not be used. 00:14:10.689 [2024-11-05 16:28:23.603374] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:14:10.690 [2024-11-05 16:28:23.603501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75649 ] 00:14:10.690 [2024-11-05 16:28:23.770197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.948 [2024-11-05 16:28:23.912254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.220 [2024-11-05 16:28:24.129902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.220 [2024-11-05 16:28:24.129955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.478 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 BaseBdev1_malloc 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 [2024-11-05 16:28:24.610214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.738 
[2024-11-05 16:28:24.610281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.738 [2024-11-05 16:28:24.610307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.738 [2024-11-05 16:28:24.610319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.738 [2024-11-05 16:28:24.612602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.738 [2024-11-05 16:28:24.612640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.738 BaseBdev1 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 BaseBdev2_malloc 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 [2024-11-05 16:28:24.661114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.738 [2024-11-05 16:28:24.661206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.738 [2024-11-05 16:28:24.661231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:11.738 [2024-11-05 16:28:24.661244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.738 [2024-11-05 16:28:24.663537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.738 [2024-11-05 16:28:24.663575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.738 BaseBdev2 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 spare_malloc 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 spare_delay 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.738 [2024-11-05 16:28:24.743212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.738 [2024-11-05 16:28:24.743286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:11.738 [2024-11-05 16:28:24.743313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:11.738 [2024-11-05 16:28:24.743326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.738 [2024-11-05 16:28:24.745929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.738 [2024-11-05 16:28:24.745957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.738 spare 00:14:11.738 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.739 [2024-11-05 16:28:24.755222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.739 [2024-11-05 16:28:24.757294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.739 [2024-11-05 16:28:24.757409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.739 [2024-11-05 16:28:24.757430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:11.739 [2024-11-05 16:28:24.757775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:11.739 [2024-11-05 16:28:24.758000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.739 [2024-11-05 16:28:24.758019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:11.739 [2024-11-05 16:28:24.758208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.739 "name": "raid_bdev1", 00:14:11.739 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:11.739 "strip_size_kb": 0, 00:14:11.739 "state": "online", 00:14:11.739 
"raid_level": "raid1", 00:14:11.739 "superblock": false, 00:14:11.739 "num_base_bdevs": 2, 00:14:11.739 "num_base_bdevs_discovered": 2, 00:14:11.739 "num_base_bdevs_operational": 2, 00:14:11.739 "base_bdevs_list": [ 00:14:11.739 { 00:14:11.739 "name": "BaseBdev1", 00:14:11.739 "uuid": "a55435f1-c0b3-5c69-a604-a276f8d13324", 00:14:11.739 "is_configured": true, 00:14:11.739 "data_offset": 0, 00:14:11.739 "data_size": 65536 00:14:11.739 }, 00:14:11.739 { 00:14:11.739 "name": "BaseBdev2", 00:14:11.739 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:11.739 "is_configured": true, 00:14:11.739 "data_offset": 0, 00:14:11.739 "data_size": 65536 00:14:11.739 } 00:14:11.739 ] 00:14:11.739 }' 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.739 16:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.307 [2024-11-05 16:28:25.218760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.307 16:28:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.307 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:12.566 [2024-11-05 16:28:25.513975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.566 /dev/nbd0 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.566 1+0 records in 00:14:12.566 1+0 records out 00:14:12.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426942 s, 9.6 MB/s 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:12.566 16:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:17.835 65536+0 records in 00:14:17.835 65536+0 records out 00:14:17.835 33554432 bytes (34 MB, 32 MiB) copied, 4.44031 s, 7.6 MB/s 00:14:17.835 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.835 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.836 [2024-11-05 16:28:30.255194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.836 [2024-11-05 16:28:30.294350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.836 16:28:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.836 "name": "raid_bdev1", 00:14:17.836 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:17.836 "strip_size_kb": 0, 00:14:17.836 "state": "online", 00:14:17.836 "raid_level": "raid1", 00:14:17.836 "superblock": false, 00:14:17.836 "num_base_bdevs": 2, 00:14:17.836 "num_base_bdevs_discovered": 1, 00:14:17.836 "num_base_bdevs_operational": 1, 00:14:17.836 "base_bdevs_list": [ 00:14:17.836 { 00:14:17.836 "name": null, 00:14:17.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.836 "is_configured": false, 00:14:17.836 "data_offset": 0, 00:14:17.836 "data_size": 65536 00:14:17.836 }, 00:14:17.836 { 00:14:17.836 "name": "BaseBdev2", 00:14:17.836 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:17.836 "is_configured": true, 00:14:17.836 "data_offset": 0, 00:14:17.836 "data_size": 65536 00:14:17.836 } 00:14:17.836 ] 00:14:17.836 }' 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.836 [2024-11-05 16:28:30.741648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.836 [2024-11-05 16:28:30.759934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.836 16:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:17.836 [2024-11-05 16:28:30.761973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.771 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.771 "name": "raid_bdev1", 00:14:18.771 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:18.771 "strip_size_kb": 0, 00:14:18.771 "state": "online", 00:14:18.771 "raid_level": "raid1", 00:14:18.771 "superblock": false, 00:14:18.771 "num_base_bdevs": 2, 00:14:18.771 "num_base_bdevs_discovered": 2, 00:14:18.771 "num_base_bdevs_operational": 2, 00:14:18.771 "process": { 00:14:18.771 "type": "rebuild", 00:14:18.771 "target": "spare", 00:14:18.771 "progress": { 00:14:18.771 
"blocks": 20480, 00:14:18.771 "percent": 31 00:14:18.771 } 00:14:18.771 }, 00:14:18.771 "base_bdevs_list": [ 00:14:18.771 { 00:14:18.772 "name": "spare", 00:14:18.772 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:18.772 "is_configured": true, 00:14:18.772 "data_offset": 0, 00:14:18.772 "data_size": 65536 00:14:18.772 }, 00:14:18.772 { 00:14:18.772 "name": "BaseBdev2", 00:14:18.772 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:18.772 "is_configured": true, 00:14:18.772 "data_offset": 0, 00:14:18.772 "data_size": 65536 00:14:18.772 } 00:14:18.772 ] 00:14:18.772 }' 00:14:18.772 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.031 16:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.031 [2024-11-05 16:28:31.925794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.031 [2024-11-05 16:28:31.968183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.031 [2024-11-05 16:28:31.968279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.031 [2024-11-05 16:28:31.968294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.031 [2024-11-05 16:28:31.968305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.031 16:28:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.031 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.031 "name": "raid_bdev1", 00:14:19.031 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:19.031 "strip_size_kb": 0, 00:14:19.031 "state": "online", 00:14:19.031 "raid_level": "raid1", 00:14:19.031 
"superblock": false, 00:14:19.031 "num_base_bdevs": 2, 00:14:19.031 "num_base_bdevs_discovered": 1, 00:14:19.032 "num_base_bdevs_operational": 1, 00:14:19.032 "base_bdevs_list": [ 00:14:19.032 { 00:14:19.032 "name": null, 00:14:19.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.032 "is_configured": false, 00:14:19.032 "data_offset": 0, 00:14:19.032 "data_size": 65536 00:14:19.032 }, 00:14:19.032 { 00:14:19.032 "name": "BaseBdev2", 00:14:19.032 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:19.032 "is_configured": true, 00:14:19.032 "data_offset": 0, 00:14:19.032 "data_size": 65536 00:14:19.032 } 00:14:19.032 ] 00:14:19.032 }' 00:14:19.032 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.032 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:19.601 "name": "raid_bdev1", 00:14:19.601 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:19.601 "strip_size_kb": 0, 00:14:19.601 "state": "online", 00:14:19.601 "raid_level": "raid1", 00:14:19.601 "superblock": false, 00:14:19.601 "num_base_bdevs": 2, 00:14:19.601 "num_base_bdevs_discovered": 1, 00:14:19.601 "num_base_bdevs_operational": 1, 00:14:19.601 "base_bdevs_list": [ 00:14:19.601 { 00:14:19.601 "name": null, 00:14:19.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.601 "is_configured": false, 00:14:19.601 "data_offset": 0, 00:14:19.601 "data_size": 65536 00:14:19.601 }, 00:14:19.601 { 00:14:19.601 "name": "BaseBdev2", 00:14:19.601 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:19.601 "is_configured": true, 00:14:19.601 "data_offset": 0, 00:14:19.601 "data_size": 65536 00:14:19.601 } 00:14:19.601 ] 00:14:19.601 }' 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.601 [2024-11-05 16:28:32.591439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.601 [2024-11-05 16:28:32.609647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:19.601 16:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.601 
16:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:19.601 [2024-11-05 16:28:32.611725] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.541 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.801 "name": "raid_bdev1", 00:14:20.801 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:20.801 "strip_size_kb": 0, 00:14:20.801 "state": "online", 00:14:20.801 "raid_level": "raid1", 00:14:20.801 "superblock": false, 00:14:20.801 "num_base_bdevs": 2, 00:14:20.801 "num_base_bdevs_discovered": 2, 00:14:20.801 "num_base_bdevs_operational": 2, 00:14:20.801 "process": { 00:14:20.801 "type": "rebuild", 00:14:20.801 "target": "spare", 00:14:20.801 "progress": { 00:14:20.801 "blocks": 20480, 00:14:20.801 "percent": 31 00:14:20.801 } 00:14:20.801 }, 00:14:20.801 "base_bdevs_list": [ 
00:14:20.801 { 00:14:20.801 "name": "spare", 00:14:20.801 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:20.801 "is_configured": true, 00:14:20.801 "data_offset": 0, 00:14:20.801 "data_size": 65536 00:14:20.801 }, 00:14:20.801 { 00:14:20.801 "name": "BaseBdev2", 00:14:20.801 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:20.801 "is_configured": true, 00:14:20.801 "data_offset": 0, 00:14:20.801 "data_size": 65536 00:14:20.801 } 00:14:20.801 ] 00:14:20.801 }' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.801 
16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.801 "name": "raid_bdev1", 00:14:20.801 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:20.801 "strip_size_kb": 0, 00:14:20.801 "state": "online", 00:14:20.801 "raid_level": "raid1", 00:14:20.801 "superblock": false, 00:14:20.801 "num_base_bdevs": 2, 00:14:20.801 "num_base_bdevs_discovered": 2, 00:14:20.801 "num_base_bdevs_operational": 2, 00:14:20.801 "process": { 00:14:20.801 "type": "rebuild", 00:14:20.801 "target": "spare", 00:14:20.801 "progress": { 00:14:20.801 "blocks": 22528, 00:14:20.801 "percent": 34 00:14:20.801 } 00:14:20.801 }, 00:14:20.801 "base_bdevs_list": [ 00:14:20.801 { 00:14:20.801 "name": "spare", 00:14:20.801 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:20.801 "is_configured": true, 00:14:20.801 "data_offset": 0, 00:14:20.801 "data_size": 65536 00:14:20.801 }, 00:14:20.801 { 00:14:20.801 "name": "BaseBdev2", 00:14:20.801 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:20.801 "is_configured": true, 00:14:20.801 "data_offset": 0, 00:14:20.801 "data_size": 65536 00:14:20.801 } 00:14:20.801 ] 00:14:20.801 }' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:20.801 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.061 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.061 16:28:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.016 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.016 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.016 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.016 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.017 "name": "raid_bdev1", 00:14:22.017 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:22.017 "strip_size_kb": 0, 00:14:22.017 "state": "online", 00:14:22.017 "raid_level": "raid1", 00:14:22.017 "superblock": false, 00:14:22.017 "num_base_bdevs": 2, 00:14:22.017 "num_base_bdevs_discovered": 2, 00:14:22.017 "num_base_bdevs_operational": 2, 00:14:22.017 "process": { 
00:14:22.017 "type": "rebuild", 00:14:22.017 "target": "spare", 00:14:22.017 "progress": { 00:14:22.017 "blocks": 45056, 00:14:22.017 "percent": 68 00:14:22.017 } 00:14:22.017 }, 00:14:22.017 "base_bdevs_list": [ 00:14:22.017 { 00:14:22.017 "name": "spare", 00:14:22.017 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:22.017 "is_configured": true, 00:14:22.017 "data_offset": 0, 00:14:22.017 "data_size": 65536 00:14:22.017 }, 00:14:22.017 { 00:14:22.017 "name": "BaseBdev2", 00:14:22.017 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:22.017 "is_configured": true, 00:14:22.017 "data_offset": 0, 00:14:22.017 "data_size": 65536 00:14:22.017 } 00:14:22.017 ] 00:14:22.017 }' 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.017 16:28:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.017 16:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.017 16:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.017 16:28:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.959 [2024-11-05 16:28:35.828423] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:22.959 [2024-11-05 16:28:35.828628] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:22.959 [2024-11-05 16:28:35.828696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.217 "name": "raid_bdev1", 00:14:23.217 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:23.217 "strip_size_kb": 0, 00:14:23.217 "state": "online", 00:14:23.217 "raid_level": "raid1", 00:14:23.217 "superblock": false, 00:14:23.217 "num_base_bdevs": 2, 00:14:23.217 "num_base_bdevs_discovered": 2, 00:14:23.217 "num_base_bdevs_operational": 2, 00:14:23.217 "base_bdevs_list": [ 00:14:23.217 { 00:14:23.217 "name": "spare", 00:14:23.217 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:23.217 "is_configured": true, 00:14:23.217 "data_offset": 0, 00:14:23.217 "data_size": 65536 00:14:23.217 }, 00:14:23.217 { 00:14:23.217 "name": "BaseBdev2", 00:14:23.217 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:23.217 "is_configured": true, 00:14:23.217 "data_offset": 0, 00:14:23.217 "data_size": 65536 00:14:23.217 } 00:14:23.217 ] 00:14:23.217 }' 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.217 16:28:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.217 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.218 "name": "raid_bdev1", 00:14:23.218 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:23.218 "strip_size_kb": 0, 00:14:23.218 "state": "online", 00:14:23.218 "raid_level": "raid1", 00:14:23.218 "superblock": false, 00:14:23.218 "num_base_bdevs": 2, 00:14:23.218 "num_base_bdevs_discovered": 2, 00:14:23.218 "num_base_bdevs_operational": 2, 00:14:23.218 "base_bdevs_list": [ 00:14:23.218 { 00:14:23.218 "name": "spare", 00:14:23.218 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:23.218 "is_configured": true, 
00:14:23.218 "data_offset": 0, 00:14:23.218 "data_size": 65536 00:14:23.218 }, 00:14:23.218 { 00:14:23.218 "name": "BaseBdev2", 00:14:23.218 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:23.218 "is_configured": true, 00:14:23.218 "data_offset": 0, 00:14:23.218 "data_size": 65536 00:14:23.218 } 00:14:23.218 ] 00:14:23.218 }' 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.218 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.476 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.477 "name": "raid_bdev1", 00:14:23.477 "uuid": "3bc01302-6288-4a0d-bcb9-54b233952083", 00:14:23.477 "strip_size_kb": 0, 00:14:23.477 "state": "online", 00:14:23.477 "raid_level": "raid1", 00:14:23.477 "superblock": false, 00:14:23.477 "num_base_bdevs": 2, 00:14:23.477 "num_base_bdevs_discovered": 2, 00:14:23.477 "num_base_bdevs_operational": 2, 00:14:23.477 "base_bdevs_list": [ 00:14:23.477 { 00:14:23.477 "name": "spare", 00:14:23.477 "uuid": "3ef6ff4a-f283-59fe-b2bf-04cc8c442994", 00:14:23.477 "is_configured": true, 00:14:23.477 "data_offset": 0, 00:14:23.477 "data_size": 65536 00:14:23.477 }, 00:14:23.477 { 00:14:23.477 "name": "BaseBdev2", 00:14:23.477 "uuid": "68fff59a-0005-5857-8f41-15a0547b2cef", 00:14:23.477 "is_configured": true, 00:14:23.477 "data_offset": 0, 00:14:23.477 "data_size": 65536 00:14:23.477 } 00:14:23.477 ] 00:14:23.477 }' 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.477 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.735 [2024-11-05 16:28:36.789033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.735 [2024-11-05 16:28:36.789206] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.735 [2024-11-05 16:28:36.789355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.735 [2024-11-05 16:28:36.789473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.735 [2024-11-05 16:28:36.789557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.735 16:28:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:23.995 16:28:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:23.995 /dev/nbd0 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.254 1+0 records in 00:14:24.254 1+0 records out 00:14:24.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421439 s, 9.7 MB/s 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:24.254 /dev/nbd1 00:14:24.254 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:24.513 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.514 1+0 records in 00:14:24.514 1+0 records out 00:14:24.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426135 s, 9.6 MB/s 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.514 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.772 16:28:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75649 00:14:25.031 16:28:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75649 ']' 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75649 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:25.031 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75649 00:14:25.290 killing process with pid 75649 00:14:25.290 Received shutdown signal, test time was about 60.000000 seconds 00:14:25.290 00:14:25.290 Latency(us) 00:14:25.290 [2024-11-05T16:28:38.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.290 [2024-11-05T16:28:38.378Z] =================================================================================================================== 00:14:25.290 [2024-11-05T16:28:38.378Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.290 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:25.290 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:25.291 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75649' 00:14:25.291 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75649 00:14:25.291 [2024-11-05 16:28:38.126752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.291 16:28:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75649 00:14:25.549 [2024-11-05 16:28:38.458815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:26.927 00:14:26.927 real 0m16.170s 00:14:26.927 user 0m18.294s 00:14:26.927 sys 0m3.184s 00:14:26.927 16:28:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 ************************************ 00:14:26.927 END TEST raid_rebuild_test 00:14:26.927 ************************************ 00:14:26.927 16:28:39 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:26.927 16:28:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:26.927 16:28:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.927 16:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 ************************************ 00:14:26.927 START TEST raid_rebuild_test_sb 00:14:26.927 ************************************ 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76078 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76078 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76078 ']' 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:26.927 16:28:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:26.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:26.927 16:28:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 [2024-11-05 16:28:39.855990] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:14:26.927 [2024-11-05 16:28:39.856115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76078 ] 00:14:26.927 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.927 Zero copy mechanism will not be used. 
00:14:27.186 [2024-11-05 16:28:40.029858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.186 [2024-11-05 16:28:40.147103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.445 [2024-11-05 16:28:40.349966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.445 [2024-11-05 16:28:40.350036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.712 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.984 BaseBdev1_malloc 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.984 [2024-11-05 16:28:40.819297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:27.984 [2024-11-05 16:28:40.819360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.984 [2024-11-05 16:28:40.819384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.984 [2024-11-05 
16:28:40.819395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.984 [2024-11-05 16:28:40.821690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.984 [2024-11-05 16:28:40.821729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.984 BaseBdev1 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.984 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 BaseBdev2_malloc 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 [2024-11-05 16:28:40.874413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:27.985 [2024-11-05 16:28:40.874470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.985 [2024-11-05 16:28:40.874489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.985 [2024-11-05 16:28:40.874501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.985 [2024-11-05 16:28:40.876784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:27.985 [2024-11-05 16:28:40.876821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.985 BaseBdev2 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 spare_malloc 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 spare_delay 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 [2024-11-05 16:28:40.953774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.985 [2024-11-05 16:28:40.953829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.985 [2024-11-05 16:28:40.953849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:27.985 [2024-11-05 16:28:40.953860] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.985 [2024-11-05 16:28:40.956158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.985 [2024-11-05 16:28:40.956196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.985 spare 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 [2024-11-05 16:28:40.965837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.985 [2024-11-05 16:28:40.967787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.985 [2024-11-05 16:28:40.967999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:27.985 [2024-11-05 16:28:40.968028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.985 [2024-11-05 16:28:40.968322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:27.985 [2024-11-05 16:28:40.968549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:27.985 [2024-11-05 16:28:40.968569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:27.985 [2024-11-05 16:28:40.968755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.985 16:28:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.985 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.985 "name": "raid_bdev1", 00:14:27.985 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:27.985 "strip_size_kb": 0, 00:14:27.985 "state": "online", 00:14:27.985 "raid_level": "raid1", 00:14:27.985 "superblock": true, 00:14:27.985 "num_base_bdevs": 2, 00:14:27.985 
"num_base_bdevs_discovered": 2, 00:14:27.985 "num_base_bdevs_operational": 2, 00:14:27.985 "base_bdevs_list": [ 00:14:27.985 { 00:14:27.985 "name": "BaseBdev1", 00:14:27.985 "uuid": "1aa58016-5f23-5454-ac6b-593898d09630", 00:14:27.985 "is_configured": true, 00:14:27.985 "data_offset": 2048, 00:14:27.985 "data_size": 63488 00:14:27.985 }, 00:14:27.985 { 00:14:27.985 "name": "BaseBdev2", 00:14:27.985 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:27.985 "is_configured": true, 00:14:27.985 "data_offset": 2048, 00:14:27.985 "data_size": 63488 00:14:27.985 } 00:14:27.985 ] 00:14:27.985 }' 00:14:27.985 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.985 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.556 [2024-11-05 16:28:41.433383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.556 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:28.816 [2024-11-05 16:28:41.732785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.816 /dev/nbd0 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.816 1+0 records in 00:14:28.816 1+0 records out 00:14:28.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038708 s, 10.6 MB/s 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.816 16:28:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:28.816 16:28:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:34.093 63488+0 records in 00:14:34.093 63488+0 records out 00:14:34.093 32505856 bytes (33 MB, 31 MiB) copied, 4.72826 s, 6.9 MB/s 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:34.093 [2024-11-05 16:28:46.767389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.093 [2024-11-05 16:28:46.819421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.093 16:28:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.093 "name": "raid_bdev1", 00:14:34.093 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:34.093 "strip_size_kb": 0, 00:14:34.093 "state": "online", 00:14:34.093 "raid_level": "raid1", 00:14:34.093 "superblock": true, 00:14:34.093 "num_base_bdevs": 2, 00:14:34.093 "num_base_bdevs_discovered": 1, 00:14:34.093 "num_base_bdevs_operational": 1, 00:14:34.093 "base_bdevs_list": [ 00:14:34.093 { 00:14:34.093 "name": null, 00:14:34.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.093 "is_configured": false, 00:14:34.093 "data_offset": 0, 00:14:34.093 "data_size": 63488 00:14:34.093 }, 00:14:34.093 { 00:14:34.093 "name": "BaseBdev2", 00:14:34.093 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:34.093 "is_configured": true, 00:14:34.093 "data_offset": 2048, 00:14:34.093 "data_size": 63488 00:14:34.093 } 00:14:34.093 ] 00:14:34.093 }' 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.093 16:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.353 16:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.353 16:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.353 16:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.353 [2024-11-05 16:28:47.334620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:14:34.353 [2024-11-05 16:28:47.355114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:34.353 16:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.353 [2024-11-05 16:28:47.357334] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.353 16:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.301 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.561 "name": "raid_bdev1", 00:14:35.561 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:35.561 "strip_size_kb": 0, 00:14:35.561 "state": "online", 00:14:35.561 "raid_level": "raid1", 00:14:35.561 "superblock": true, 00:14:35.561 "num_base_bdevs": 2, 00:14:35.561 
"num_base_bdevs_discovered": 2, 00:14:35.561 "num_base_bdevs_operational": 2, 00:14:35.561 "process": { 00:14:35.561 "type": "rebuild", 00:14:35.561 "target": "spare", 00:14:35.561 "progress": { 00:14:35.561 "blocks": 20480, 00:14:35.561 "percent": 32 00:14:35.561 } 00:14:35.561 }, 00:14:35.561 "base_bdevs_list": [ 00:14:35.561 { 00:14:35.561 "name": "spare", 00:14:35.561 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:35.561 "is_configured": true, 00:14:35.561 "data_offset": 2048, 00:14:35.561 "data_size": 63488 00:14:35.561 }, 00:14:35.561 { 00:14:35.561 "name": "BaseBdev2", 00:14:35.561 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:35.561 "is_configured": true, 00:14:35.561 "data_offset": 2048, 00:14:35.561 "data_size": 63488 00:14:35.561 } 00:14:35.561 ] 00:14:35.561 }' 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.561 [2024-11-05 16:28:48.524716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.561 [2024-11-05 16:28:48.564059] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.561 [2024-11-05 16:28:48.564178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.561 [2024-11-05 16:28:48.564200] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.561 [2024-11-05 16:28:48.564218] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.561 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.820 16:28:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.820 "name": "raid_bdev1", 00:14:35.820 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:35.820 "strip_size_kb": 0, 00:14:35.820 "state": "online", 00:14:35.820 "raid_level": "raid1", 00:14:35.820 "superblock": true, 00:14:35.820 "num_base_bdevs": 2, 00:14:35.820 "num_base_bdevs_discovered": 1, 00:14:35.820 "num_base_bdevs_operational": 1, 00:14:35.820 "base_bdevs_list": [ 00:14:35.820 { 00:14:35.820 "name": null, 00:14:35.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.820 "is_configured": false, 00:14:35.820 "data_offset": 0, 00:14:35.820 "data_size": 63488 00:14:35.820 }, 00:14:35.820 { 00:14:35.820 "name": "BaseBdev2", 00:14:35.820 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:35.820 "is_configured": true, 00:14:35.820 "data_offset": 2048, 00:14:35.820 "data_size": 63488 00:14:35.820 } 00:14:35.820 ] 00:14:35.820 }' 00:14:35.820 16:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.820 16:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.079 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.080 16:28:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.080 16:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.080 16:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.080 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.080 "name": "raid_bdev1", 00:14:36.080 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:36.080 "strip_size_kb": 0, 00:14:36.080 "state": "online", 00:14:36.080 "raid_level": "raid1", 00:14:36.080 "superblock": true, 00:14:36.080 "num_base_bdevs": 2, 00:14:36.080 "num_base_bdevs_discovered": 1, 00:14:36.080 "num_base_bdevs_operational": 1, 00:14:36.080 "base_bdevs_list": [ 00:14:36.080 { 00:14:36.080 "name": null, 00:14:36.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.080 "is_configured": false, 00:14:36.080 "data_offset": 0, 00:14:36.080 "data_size": 63488 00:14:36.080 }, 00:14:36.080 { 00:14:36.080 "name": "BaseBdev2", 00:14:36.080 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:36.080 "is_configured": true, 00:14:36.080 "data_offset": 2048, 00:14:36.080 "data_size": 63488 00:14:36.080 } 00:14:36.080 ] 00:14:36.080 }' 00:14:36.080 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:36.339 [2024-11-05 16:28:49.230314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.339 [2024-11-05 16:28:49.248125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.339 16:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:36.339 [2024-11-05 16:28:49.250217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.275 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.275 "name": "raid_bdev1", 00:14:37.275 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:37.276 "strip_size_kb": 0, 00:14:37.276 "state": "online", 00:14:37.276 "raid_level": "raid1", 
00:14:37.276 "superblock": true, 00:14:37.276 "num_base_bdevs": 2, 00:14:37.276 "num_base_bdevs_discovered": 2, 00:14:37.276 "num_base_bdevs_operational": 2, 00:14:37.276 "process": { 00:14:37.276 "type": "rebuild", 00:14:37.276 "target": "spare", 00:14:37.276 "progress": { 00:14:37.276 "blocks": 20480, 00:14:37.276 "percent": 32 00:14:37.276 } 00:14:37.276 }, 00:14:37.276 "base_bdevs_list": [ 00:14:37.276 { 00:14:37.276 "name": "spare", 00:14:37.276 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:37.276 "is_configured": true, 00:14:37.276 "data_offset": 2048, 00:14:37.276 "data_size": 63488 00:14:37.276 }, 00:14:37.276 { 00:14:37.276 "name": "BaseBdev2", 00:14:37.276 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:37.276 "is_configured": true, 00:14:37.276 "data_offset": 2048, 00:14:37.276 "data_size": 63488 00:14:37.276 } 00:14:37.276 ] 00:14:37.276 }' 00:14:37.276 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.276 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.276 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:37.535 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:37.535 16:28:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.535 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.536 "name": "raid_bdev1", 00:14:37.536 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:37.536 "strip_size_kb": 0, 00:14:37.536 "state": "online", 00:14:37.536 "raid_level": "raid1", 00:14:37.536 "superblock": true, 00:14:37.536 "num_base_bdevs": 2, 00:14:37.536 "num_base_bdevs_discovered": 2, 00:14:37.536 "num_base_bdevs_operational": 2, 00:14:37.536 "process": { 00:14:37.536 "type": "rebuild", 00:14:37.536 "target": "spare", 00:14:37.536 "progress": { 00:14:37.536 "blocks": 22528, 00:14:37.536 "percent": 35 00:14:37.536 } 00:14:37.536 }, 00:14:37.536 "base_bdevs_list": [ 
00:14:37.536 { 00:14:37.536 "name": "spare", 00:14:37.536 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:37.536 "is_configured": true, 00:14:37.536 "data_offset": 2048, 00:14:37.536 "data_size": 63488 00:14:37.536 }, 00:14:37.536 { 00:14:37.536 "name": "BaseBdev2", 00:14:37.536 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:37.536 "is_configured": true, 00:14:37.536 "data_offset": 2048, 00:14:37.536 "data_size": 63488 00:14:37.536 } 00:14:37.536 ] 00:14:37.536 }' 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.536 16:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.479 16:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.744 "name": "raid_bdev1", 00:14:38.744 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:38.744 "strip_size_kb": 0, 00:14:38.744 "state": "online", 00:14:38.744 "raid_level": "raid1", 00:14:38.744 "superblock": true, 00:14:38.744 "num_base_bdevs": 2, 00:14:38.744 "num_base_bdevs_discovered": 2, 00:14:38.744 "num_base_bdevs_operational": 2, 00:14:38.744 "process": { 00:14:38.744 "type": "rebuild", 00:14:38.744 "target": "spare", 00:14:38.744 "progress": { 00:14:38.744 "blocks": 45056, 00:14:38.744 "percent": 70 00:14:38.744 } 00:14:38.744 }, 00:14:38.744 "base_bdevs_list": [ 00:14:38.744 { 00:14:38.744 "name": "spare", 00:14:38.744 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:38.744 "is_configured": true, 00:14:38.744 "data_offset": 2048, 00:14:38.744 "data_size": 63488 00:14:38.744 }, 00:14:38.744 { 00:14:38.744 "name": "BaseBdev2", 00:14:38.744 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:38.744 "is_configured": true, 00:14:38.744 "data_offset": 2048, 00:14:38.744 "data_size": 63488 00:14:38.744 } 00:14:38.744 ] 00:14:38.744 }' 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.744 16:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.313 [2024-11-05 
16:28:52.365953] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:39.313 [2024-11-05 16:28:52.366062] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:39.313 [2024-11-05 16:28:52.366198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.882 "name": "raid_bdev1", 00:14:39.882 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:39.882 "strip_size_kb": 0, 00:14:39.882 "state": "online", 00:14:39.882 "raid_level": "raid1", 00:14:39.882 "superblock": true, 00:14:39.882 "num_base_bdevs": 2, 00:14:39.882 "num_base_bdevs_discovered": 2, 00:14:39.882 
"num_base_bdevs_operational": 2, 00:14:39.882 "base_bdevs_list": [ 00:14:39.882 { 00:14:39.882 "name": "spare", 00:14:39.882 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:39.882 "is_configured": true, 00:14:39.882 "data_offset": 2048, 00:14:39.882 "data_size": 63488 00:14:39.882 }, 00:14:39.882 { 00:14:39.882 "name": "BaseBdev2", 00:14:39.882 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:39.882 "is_configured": true, 00:14:39.882 "data_offset": 2048, 00:14:39.882 "data_size": 63488 00:14:39.882 } 00:14:39.882 ] 00:14:39.882 }' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.882 "name": "raid_bdev1", 00:14:39.882 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:39.882 "strip_size_kb": 0, 00:14:39.882 "state": "online", 00:14:39.882 "raid_level": "raid1", 00:14:39.882 "superblock": true, 00:14:39.882 "num_base_bdevs": 2, 00:14:39.882 "num_base_bdevs_discovered": 2, 00:14:39.882 "num_base_bdevs_operational": 2, 00:14:39.882 "base_bdevs_list": [ 00:14:39.882 { 00:14:39.882 "name": "spare", 00:14:39.882 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:39.882 "is_configured": true, 00:14:39.882 "data_offset": 2048, 00:14:39.882 "data_size": 63488 00:14:39.882 }, 00:14:39.882 { 00:14:39.882 "name": "BaseBdev2", 00:14:39.882 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:39.882 "is_configured": true, 00:14:39.882 "data_offset": 2048, 00:14:39.882 "data_size": 63488 00:14:39.882 } 00:14:39.882 ] 00:14:39.882 }' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.882 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.142 16:28:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.142 16:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.142 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.142 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.142 "name": "raid_bdev1", 00:14:40.142 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:40.142 "strip_size_kb": 0, 00:14:40.142 "state": "online", 00:14:40.142 "raid_level": "raid1", 00:14:40.142 "superblock": true, 00:14:40.142 "num_base_bdevs": 2, 00:14:40.142 "num_base_bdevs_discovered": 2, 00:14:40.142 "num_base_bdevs_operational": 2, 00:14:40.142 "base_bdevs_list": [ 00:14:40.142 { 00:14:40.142 "name": "spare", 00:14:40.142 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:40.142 "is_configured": true, 00:14:40.142 "data_offset": 2048, 00:14:40.142 "data_size": 63488 00:14:40.142 }, 00:14:40.142 { 
00:14:40.142 "name": "BaseBdev2", 00:14:40.142 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:40.142 "is_configured": true, 00:14:40.142 "data_offset": 2048, 00:14:40.142 "data_size": 63488 00:14:40.142 } 00:14:40.142 ] 00:14:40.142 }' 00:14:40.142 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.142 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.400 [2024-11-05 16:28:53.461178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.400 [2024-11-05 16:28:53.461233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.400 [2024-11-05 16:28:53.461333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.400 [2024-11-05 16:28:53.461412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.400 [2024-11-05 16:28:53.461425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.400 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.400 16:28:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.659 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:40.918 /dev/nbd0 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.918 1+0 records in 00:14:40.918 1+0 records out 00:14:40.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485672 s, 8.4 MB/s 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.918 16:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:41.178 /dev/nbd1 00:14:41.178 16:28:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.178 1+0 records in 00:14:41.178 1+0 records out 00:14:41.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461611 s, 8.9 MB/s 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:41.178 16:28:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.178 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.438 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.697 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.957 16:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.957 [2024-11-05 16:28:55.018872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:41.957 [2024-11-05 16:28:55.018969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.957 [2024-11-05 16:28:55.018999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:41.957 [2024-11-05 16:28:55.019012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.957 [2024-11-05 16:28:55.021585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.957 [2024-11-05 16:28:55.021629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.957 [2024-11-05 16:28:55.021747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:41.957 [2024-11-05 16:28:55.021808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.957 [2024-11-05 16:28:55.021994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.957 spare 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.957 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.247 [2024-11-05 16:28:55.121918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:42.247 [2024-11-05 16:28:55.121986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.247 [2024-11-05 16:28:55.122395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:42.247 [2024-11-05 16:28:55.122673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:42.247 [2024-11-05 16:28:55.122695] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:42.247 [2024-11-05 16:28:55.122948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.247 
16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.247 "name": "raid_bdev1", 00:14:42.247 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:42.247 "strip_size_kb": 0, 00:14:42.247 "state": "online", 00:14:42.247 "raid_level": "raid1", 00:14:42.247 "superblock": true, 00:14:42.247 "num_base_bdevs": 2, 00:14:42.247 "num_base_bdevs_discovered": 2, 00:14:42.247 "num_base_bdevs_operational": 2, 00:14:42.247 "base_bdevs_list": [ 00:14:42.247 { 00:14:42.247 "name": "spare", 00:14:42.247 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:42.247 "is_configured": true, 00:14:42.247 "data_offset": 2048, 00:14:42.247 "data_size": 63488 00:14:42.247 }, 00:14:42.247 { 00:14:42.247 "name": "BaseBdev2", 00:14:42.247 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:42.247 "is_configured": true, 00:14:42.247 "data_offset": 2048, 00:14:42.247 "data_size": 63488 00:14:42.247 } 00:14:42.247 ] 00:14:42.247 }' 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.247 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.509 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.768 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.768 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.768 "name": "raid_bdev1", 00:14:42.768 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:42.768 "strip_size_kb": 0, 00:14:42.768 "state": "online", 00:14:42.768 "raid_level": "raid1", 00:14:42.768 "superblock": true, 00:14:42.768 "num_base_bdevs": 2, 00:14:42.768 "num_base_bdevs_discovered": 2, 00:14:42.768 "num_base_bdevs_operational": 2, 00:14:42.768 "base_bdevs_list": [ 00:14:42.768 { 00:14:42.768 "name": "spare", 00:14:42.768 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:42.768 "is_configured": true, 00:14:42.768 "data_offset": 2048, 00:14:42.768 "data_size": 63488 00:14:42.768 }, 00:14:42.768 { 00:14:42.768 "name": "BaseBdev2", 00:14:42.768 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:42.768 "is_configured": true, 00:14:42.768 "data_offset": 2048, 00:14:42.768 "data_size": 63488 00:14:42.768 } 00:14:42.768 ] 00:14:42.768 }' 00:14:42.768 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.768 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.768 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 [2024-11-05 16:28:55.797842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.769 16:28:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.769 "name": "raid_bdev1", 00:14:42.769 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:42.769 "strip_size_kb": 0, 00:14:42.769 "state": "online", 00:14:42.769 "raid_level": "raid1", 00:14:42.769 "superblock": true, 00:14:42.769 "num_base_bdevs": 2, 00:14:42.769 "num_base_bdevs_discovered": 1, 00:14:42.769 "num_base_bdevs_operational": 1, 00:14:42.769 "base_bdevs_list": [ 00:14:42.769 { 00:14:42.769 "name": null, 00:14:42.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.769 "is_configured": false, 00:14:42.769 "data_offset": 0, 00:14:42.769 "data_size": 63488 00:14:42.769 }, 00:14:42.769 { 00:14:42.769 "name": "BaseBdev2", 00:14:42.769 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:42.769 "is_configured": true, 00:14:42.769 "data_offset": 2048, 00:14:42.769 "data_size": 63488 00:14:42.769 } 00:14:42.769 ] 00:14:42.769 }' 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.769 16:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.338 16:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.338 16:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.338 16:28:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.338 [2024-11-05 16:28:56.304979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.338 [2024-11-05 16:28:56.305206] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.338 [2024-11-05 16:28:56.305225] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:43.338 [2024-11-05 16:28:56.305271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.338 [2024-11-05 16:28:56.324132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:43.338 16:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.338 16:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:43.338 [2024-11-05 16:28:56.326363] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.276 16:28:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.535 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.535 "name": "raid_bdev1", 00:14:44.535 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:44.535 "strip_size_kb": 0, 00:14:44.535 "state": "online", 00:14:44.535 "raid_level": "raid1", 00:14:44.535 "superblock": true, 00:14:44.535 "num_base_bdevs": 2, 00:14:44.535 "num_base_bdevs_discovered": 2, 00:14:44.535 "num_base_bdevs_operational": 2, 00:14:44.535 "process": { 00:14:44.535 "type": "rebuild", 00:14:44.535 "target": "spare", 00:14:44.535 "progress": { 00:14:44.535 "blocks": 20480, 00:14:44.535 "percent": 32 00:14:44.535 } 00:14:44.535 }, 00:14:44.535 "base_bdevs_list": [ 00:14:44.535 { 00:14:44.535 "name": "spare", 00:14:44.535 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:44.535 "is_configured": true, 00:14:44.536 "data_offset": 2048, 00:14:44.536 "data_size": 63488 00:14:44.536 }, 00:14:44.536 { 00:14:44.536 "name": "BaseBdev2", 00:14:44.536 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:44.536 "is_configured": true, 00:14:44.536 "data_offset": 2048, 00:14:44.536 "data_size": 63488 00:14:44.536 } 00:14:44.536 ] 00:14:44.536 }' 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.536 [2024-11-05 16:28:57.474149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.536 [2024-11-05 16:28:57.532482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.536 [2024-11-05 16:28:57.532635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.536 [2024-11-05 16:28:57.532659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.536 [2024-11-05 16:28:57.532671] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.536 
16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.536 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.795 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.795 "name": "raid_bdev1", 00:14:44.795 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:44.795 "strip_size_kb": 0, 00:14:44.795 "state": "online", 00:14:44.795 "raid_level": "raid1", 00:14:44.795 "superblock": true, 00:14:44.795 "num_base_bdevs": 2, 00:14:44.795 "num_base_bdevs_discovered": 1, 00:14:44.795 "num_base_bdevs_operational": 1, 00:14:44.795 "base_bdevs_list": [ 00:14:44.795 { 00:14:44.795 "name": null, 00:14:44.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.795 "is_configured": false, 00:14:44.795 "data_offset": 0, 00:14:44.795 "data_size": 63488 00:14:44.795 }, 00:14:44.795 { 00:14:44.795 "name": "BaseBdev2", 00:14:44.795 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:44.795 "is_configured": true, 00:14:44.795 "data_offset": 2048, 00:14:44.795 "data_size": 63488 00:14:44.795 } 00:14:44.795 ] 00:14:44.795 }' 00:14:44.795 16:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.795 16:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.054 16:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.054 16:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.054 16:28:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.054 [2024-11-05 16:28:58.032833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.054 [2024-11-05 16:28:58.032918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.054 [2024-11-05 16:28:58.032943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:45.054 [2024-11-05 16:28:58.032957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.054 [2024-11-05 16:28:58.033503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.054 [2024-11-05 16:28:58.033549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.054 [2024-11-05 16:28:58.033667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:45.054 [2024-11-05 16:28:58.033693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:45.054 [2024-11-05 16:28:58.033708] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:45.054 [2024-11-05 16:28:58.033744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.054 [2024-11-05 16:28:58.053223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:45.054 spare 00:14:45.055 16:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.055 16:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:45.055 [2024-11-05 16:28:58.055478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.993 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.253 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.253 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.253 "name": "raid_bdev1", 00:14:46.253 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:46.253 "strip_size_kb": 0, 00:14:46.253 "state": "online", 00:14:46.253 
"raid_level": "raid1", 00:14:46.253 "superblock": true, 00:14:46.253 "num_base_bdevs": 2, 00:14:46.253 "num_base_bdevs_discovered": 2, 00:14:46.253 "num_base_bdevs_operational": 2, 00:14:46.253 "process": { 00:14:46.253 "type": "rebuild", 00:14:46.253 "target": "spare", 00:14:46.253 "progress": { 00:14:46.253 "blocks": 20480, 00:14:46.253 "percent": 32 00:14:46.253 } 00:14:46.253 }, 00:14:46.253 "base_bdevs_list": [ 00:14:46.253 { 00:14:46.253 "name": "spare", 00:14:46.253 "uuid": "2fe2b440-8dd8-51a5-8c82-e4680f7a547b", 00:14:46.253 "is_configured": true, 00:14:46.254 "data_offset": 2048, 00:14:46.254 "data_size": 63488 00:14:46.254 }, 00:14:46.254 { 00:14:46.254 "name": "BaseBdev2", 00:14:46.254 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:46.254 "is_configured": true, 00:14:46.254 "data_offset": 2048, 00:14:46.254 "data_size": 63488 00:14:46.254 } 00:14:46.254 ] 00:14:46.254 }' 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.254 [2024-11-05 16:28:59.222786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.254 [2024-11-05 16:28:59.261847] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.254 [2024-11-05 16:28:59.261949] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.254 [2024-11-05 16:28:59.261971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.254 [2024-11-05 16:28:59.261980] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.254 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.254 16:28:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.513 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.513 "name": "raid_bdev1", 00:14:46.513 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:46.513 "strip_size_kb": 0, 00:14:46.513 "state": "online", 00:14:46.513 "raid_level": "raid1", 00:14:46.513 "superblock": true, 00:14:46.513 "num_base_bdevs": 2, 00:14:46.513 "num_base_bdevs_discovered": 1, 00:14:46.513 "num_base_bdevs_operational": 1, 00:14:46.513 "base_bdevs_list": [ 00:14:46.513 { 00:14:46.513 "name": null, 00:14:46.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.513 "is_configured": false, 00:14:46.513 "data_offset": 0, 00:14:46.513 "data_size": 63488 00:14:46.513 }, 00:14:46.513 { 00:14:46.513 "name": "BaseBdev2", 00:14:46.513 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:46.513 "is_configured": true, 00:14:46.513 "data_offset": 2048, 00:14:46.513 "data_size": 63488 00:14:46.513 } 00:14:46.513 ] 00:14:46.513 }' 00:14:46.513 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.513 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.771 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.771 "name": "raid_bdev1", 00:14:46.771 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:46.771 "strip_size_kb": 0, 00:14:46.771 "state": "online", 00:14:46.771 "raid_level": "raid1", 00:14:46.771 "superblock": true, 00:14:46.771 "num_base_bdevs": 2, 00:14:46.771 "num_base_bdevs_discovered": 1, 00:14:46.771 "num_base_bdevs_operational": 1, 00:14:46.771 "base_bdevs_list": [ 00:14:46.771 { 00:14:46.771 "name": null, 00:14:46.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.771 "is_configured": false, 00:14:46.771 "data_offset": 0, 00:14:46.771 "data_size": 63488 00:14:46.771 }, 00:14:46.771 { 00:14:46.772 "name": "BaseBdev2", 00:14:46.772 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:46.772 "is_configured": true, 00:14:46.772 "data_offset": 2048, 00:14:46.772 "data_size": 63488 00:14:46.772 } 00:14:46.772 ] 00:14:46.772 }' 00:14:46.772 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.031 [2024-11-05 16:28:59.935669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:47.031 [2024-11-05 16:28:59.935744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.031 [2024-11-05 16:28:59.935772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:47.031 [2024-11-05 16:28:59.935794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.031 [2024-11-05 16:28:59.936334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.031 [2024-11-05 16:28:59.936355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.031 [2024-11-05 16:28:59.936460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:47.031 [2024-11-05 16:28:59.936477] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.031 [2024-11-05 16:28:59.936488] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:47.031 [2024-11-05 16:28:59.936501] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:47.031 BaseBdev1 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.031 16:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.969 "name": "raid_bdev1", 00:14:47.969 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:47.969 
"strip_size_kb": 0, 00:14:47.969 "state": "online", 00:14:47.969 "raid_level": "raid1", 00:14:47.969 "superblock": true, 00:14:47.969 "num_base_bdevs": 2, 00:14:47.969 "num_base_bdevs_discovered": 1, 00:14:47.969 "num_base_bdevs_operational": 1, 00:14:47.969 "base_bdevs_list": [ 00:14:47.969 { 00:14:47.969 "name": null, 00:14:47.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.969 "is_configured": false, 00:14:47.969 "data_offset": 0, 00:14:47.969 "data_size": 63488 00:14:47.969 }, 00:14:47.969 { 00:14:47.969 "name": "BaseBdev2", 00:14:47.969 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:47.969 "is_configured": true, 00:14:47.969 "data_offset": 2048, 00:14:47.969 "data_size": 63488 00:14:47.969 } 00:14:47.969 ] 00:14:47.969 }' 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.969 16:29:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.538 16:29:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.538 "name": "raid_bdev1", 00:14:48.538 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:48.538 "strip_size_kb": 0, 00:14:48.538 "state": "online", 00:14:48.538 "raid_level": "raid1", 00:14:48.538 "superblock": true, 00:14:48.538 "num_base_bdevs": 2, 00:14:48.538 "num_base_bdevs_discovered": 1, 00:14:48.538 "num_base_bdevs_operational": 1, 00:14:48.538 "base_bdevs_list": [ 00:14:48.538 { 00:14:48.538 "name": null, 00:14:48.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.538 "is_configured": false, 00:14:48.538 "data_offset": 0, 00:14:48.538 "data_size": 63488 00:14:48.538 }, 00:14:48.538 { 00:14:48.538 "name": "BaseBdev2", 00:14:48.538 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:48.538 "is_configured": true, 00:14:48.538 "data_offset": 2048, 00:14:48.538 "data_size": 63488 00:14:48.538 } 00:14:48.538 ] 00:14:48.538 }' 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.538 [2024-11-05 16:29:01.521108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.538 [2024-11-05 16:29:01.521285] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:48.538 [2024-11-05 16:29:01.521303] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:48.538 request: 00:14:48.538 { 00:14:48.538 "base_bdev": "BaseBdev1", 00:14:48.538 "raid_bdev": "raid_bdev1", 00:14:48.538 "method": "bdev_raid_add_base_bdev", 00:14:48.538 "req_id": 1 00:14:48.538 } 00:14:48.538 Got JSON-RPC error response 00:14:48.538 response: 00:14:48.538 { 00:14:48.538 "code": -22, 00:14:48.538 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:48.538 } 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.538 16:29:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.538 16:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:49.476 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:49.476 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.476 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.477 16:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.764 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.764 "name": "raid_bdev1", 00:14:49.764 "uuid": 
"cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:49.764 "strip_size_kb": 0, 00:14:49.764 "state": "online", 00:14:49.764 "raid_level": "raid1", 00:14:49.764 "superblock": true, 00:14:49.764 "num_base_bdevs": 2, 00:14:49.764 "num_base_bdevs_discovered": 1, 00:14:49.764 "num_base_bdevs_operational": 1, 00:14:49.764 "base_bdevs_list": [ 00:14:49.764 { 00:14:49.764 "name": null, 00:14:49.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.764 "is_configured": false, 00:14:49.764 "data_offset": 0, 00:14:49.764 "data_size": 63488 00:14:49.764 }, 00:14:49.764 { 00:14:49.764 "name": "BaseBdev2", 00:14:49.764 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:49.764 "is_configured": true, 00:14:49.764 "data_offset": 2048, 00:14:49.764 "data_size": 63488 00:14:49.764 } 00:14:49.764 ] 00:14:49.764 }' 00:14:49.764 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.764 16:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.023 16:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.023 "name": "raid_bdev1", 00:14:50.023 "uuid": "cf67c355-78a3-4811-9117-4c4b19e8e2b0", 00:14:50.023 "strip_size_kb": 0, 00:14:50.023 "state": "online", 00:14:50.023 "raid_level": "raid1", 00:14:50.023 "superblock": true, 00:14:50.023 "num_base_bdevs": 2, 00:14:50.023 "num_base_bdevs_discovered": 1, 00:14:50.023 "num_base_bdevs_operational": 1, 00:14:50.023 "base_bdevs_list": [ 00:14:50.023 { 00:14:50.023 "name": null, 00:14:50.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.023 "is_configured": false, 00:14:50.023 "data_offset": 0, 00:14:50.023 "data_size": 63488 00:14:50.023 }, 00:14:50.023 { 00:14:50.023 "name": "BaseBdev2", 00:14:50.023 "uuid": "9be03c6b-1999-52a8-90f2-244a8a1428fc", 00:14:50.023 "is_configured": true, 00:14:50.023 "data_offset": 2048, 00:14:50.023 "data_size": 63488 00:14:50.023 } 00:14:50.023 ] 00:14:50.023 }' 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.023 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76078 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76078 ']' 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76078 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76078 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.282 killing process with pid 76078 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76078' 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76078 00:14:50.282 Received shutdown signal, test time was about 60.000000 seconds 00:14:50.282 00:14:50.282 Latency(us) 00:14:50.282 [2024-11-05T16:29:03.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.282 [2024-11-05T16:29:03.370Z] =================================================================================================================== 00:14:50.282 [2024-11-05T16:29:03.370Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.282 [2024-11-05 16:29:03.193348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.282 16:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76078 00:14:50.282 [2024-11-05 16:29:03.193497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.282 [2024-11-05 16:29:03.193568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.282 [2024-11-05 16:29:03.193582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:50.542 [2024-11-05 16:29:03.505577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:14:51.922 00:14:51.922 real 0m24.884s 00:14:51.922 user 0m30.266s 00:14:51.922 sys 0m4.190s 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.922 ************************************ 00:14:51.922 END TEST raid_rebuild_test_sb 00:14:51.922 ************************************ 00:14:51.922 16:29:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:51.922 16:29:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:51.922 16:29:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.922 16:29:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.922 ************************************ 00:14:51.922 START TEST raid_rebuild_test_io 00:14:51.922 ************************************ 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76819 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76819 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 
76819 ']' 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.922 16:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.922 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.922 Zero copy mechanism will not be used. 00:14:51.922 [2024-11-05 16:29:04.811036] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:14:51.922 [2024-11-05 16:29:04.811152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76819 ] 00:14:51.922 [2024-11-05 16:29:04.987622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.181 [2024-11-05 16:29:05.111567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.439 [2024-11-05 16:29:05.327548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.439 [2024-11-05 16:29:05.327588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 BaseBdev1_malloc 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 [2024-11-05 16:29:05.697210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.698 [2024-11-05 16:29:05.697273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.698 [2024-11-05 16:29:05.697297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.698 [2024-11-05 16:29:05.697309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.698 [2024-11-05 16:29:05.699420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.698 [2024-11-05 16:29:05.699457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.698 BaseBdev1 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.698 BaseBdev2_malloc
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.698 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.698 [2024-11-05 16:29:05.752636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:52.698 [2024-11-05 16:29:05.752699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:52.698 [2024-11-05 16:29:05.752720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:52.698 [2024-11-05 16:29:05.752732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:52.699 [2024-11-05 16:29:05.754810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:52.699 [2024-11-05 16:29:05.754846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:52.699 BaseBdev2
00:14:52.699 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.699 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:52.699 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.699 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.957 spare_malloc
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.957 spare_delay
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.957 [2024-11-05 16:29:05.831376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:52.957 [2024-11-05 16:29:05.831437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:52.957 [2024-11-05 16:29:05.831458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:14:52.957 [2024-11-05 16:29:05.831469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:52.957 [2024-11-05 16:29:05.833760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:52.957 [2024-11-05 16:29:05.833797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:52.957 spare
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.957 [2024-11-05 16:29:05.843406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:52.957 [2024-11-05 16:29:05.845330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:52.957 [2024-11-05 16:29:05.845441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:14:52.957 [2024-11-05 16:29:05.845457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:14:52.957 [2024-11-05 16:29:05.845775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:52.957 [2024-11-05 16:29:05.845954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:14:52.957 [2024-11-05 16:29:05.845972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:14:52.957 [2024-11-05 16:29:05.846140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:52.957 "name": "raid_bdev1",
00:14:52.957 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:52.957 "strip_size_kb": 0,
00:14:52.957 "state": "online",
00:14:52.957 "raid_level": "raid1",
00:14:52.957 "superblock": false,
00:14:52.957 "num_base_bdevs": 2,
00:14:52.957 "num_base_bdevs_discovered": 2,
00:14:52.957 "num_base_bdevs_operational": 2,
00:14:52.957 "base_bdevs_list": [
00:14:52.957 {
00:14:52.957 "name": "BaseBdev1",
00:14:52.957 "uuid": "9d6aafc6-d12f-5c7f-b562-097800cbcf34",
00:14:52.957 "is_configured": true,
00:14:52.957 "data_offset": 0,
00:14:52.957 "data_size": 65536
00:14:52.957 },
00:14:52.957 {
00:14:52.957 "name": "BaseBdev2",
00:14:52.957 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:52.957 "is_configured": true,
00:14:52.957 "data_offset": 0,
00:14:52.957 "data_size": 65536
00:14:52.957 }
00:14:52.957 ]
00:14:52.957 }'
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:52.957 16:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.215 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:53.215 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:53.215 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.474 [2024-11-05 16:29:06.310947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.474 [2024-11-05 16:29:06.410513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:53.474 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:53.475 "name": "raid_bdev1",
00:14:53.475 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:53.475 "strip_size_kb": 0,
00:14:53.475 "state": "online",
00:14:53.475 "raid_level": "raid1",
00:14:53.475 "superblock": false,
00:14:53.475 "num_base_bdevs": 2,
00:14:53.475 "num_base_bdevs_discovered": 1,
00:14:53.475 "num_base_bdevs_operational": 1,
00:14:53.475 "base_bdevs_list": [
00:14:53.475 {
00:14:53.475 "name": null,
00:14:53.475 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:53.475 "is_configured": false,
00:14:53.475 "data_offset": 0,
00:14:53.475 "data_size": 65536
00:14:53.475 },
00:14:53.475 {
00:14:53.475 "name": "BaseBdev2",
00:14:53.475 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:53.475 "is_configured": true,
00:14:53.475 "data_offset": 0,
00:14:53.475 "data_size": 65536
00:14:53.475 }
00:14:53.475 ]
00:14:53.475 }'
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:53.475 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:53.475 [2024-11-05 16:29:06.499115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
I/O size of 3145728 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 60 seconds...
00:14:54.043 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:54.043 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:54.043 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:54.043 [2024-11-05 16:29:06.876571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:54.043 16:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:54.043 16:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:54.043 [2024-11-05 16:29:06.945209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:14:54.043 [2024-11-05 16:29:06.947215] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:54.043 [2024-11-05 16:29:07.062545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:54.043 [2024-11-05 16:29:07.063178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:54.303 [2024-11-05 16:29:07.280007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:54.303 [2024-11-05 16:29:07.280332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:54.562 205.00 IOPS, 615.00 MiB/s [2024-11-05T16:29:07.650Z]
[2024-11-05 16:29:07.520976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:14:54.821 [2024-11-05 16:29:07.669208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:55.081 "name": "raid_bdev1",
00:14:55.081 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:55.081 "strip_size_kb": 0,
00:14:55.081 "state": "online",
00:14:55.081 "raid_level": "raid1",
00:14:55.081 "superblock": false,
00:14:55.081 "num_base_bdevs": 2,
00:14:55.081 "num_base_bdevs_discovered": 2,
00:14:55.081 "num_base_bdevs_operational": 2,
00:14:55.081 "process": {
00:14:55.081 "type": "rebuild",
00:14:55.081 "target": "spare",
00:14:55.081 "progress": {
00:14:55.081 "blocks": 12288,
00:14:55.081 "percent": 18
00:14:55.081 }
00:14:55.081 },
00:14:55.081 "base_bdevs_list": [
00:14:55.081 {
00:14:55.081 "name": "spare",
00:14:55.081 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a",
00:14:55.081 "is_configured": true,
00:14:55.081 "data_offset": 0,
00:14:55.081 "data_size": 65536
00:14:55.081 },
00:14:55.081 {
00:14:55.081 "name": "BaseBdev2",
00:14:55.081 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:55.081 "is_configured": true,
00:14:55.081 "data_offset": 0,
00:14:55.081 "data_size": 65536
00:14:55.081 }
00:14:55.081 ]
00:14:55.081 }'
00:14:55.081 16:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
[2024-11-05 16:29:08.002471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.081 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.081 [2024-11-05 16:29:08.092943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:55.081 [2024-11-05 16:29:08.118560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:55.340 [2024-11-05 16:29:08.219817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:55.340 [2024-11-05 16:29:08.228452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-11-05 16:29:08.228615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-11-05 16:29:08.228653] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:55.340 [2024-11-05 16:29:08.283005] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:55.340 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:55.341 "name": "raid_bdev1",
00:14:55.341 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:55.341 "strip_size_kb": 0,
00:14:55.341 "state": "online",
00:14:55.341 "raid_level": "raid1",
00:14:55.341 "superblock": false,
00:14:55.341 "num_base_bdevs": 2,
00:14:55.341 "num_base_bdevs_discovered": 1,
00:14:55.341 "num_base_bdevs_operational": 1,
00:14:55.341 "base_bdevs_list": [
00:14:55.341 {
00:14:55.341 "name": null,
00:14:55.341 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.341 "is_configured": false,
00:14:55.341 "data_offset": 0,
00:14:55.341 "data_size": 65536
00:14:55.341 },
00:14:55.341 {
00:14:55.341 "name": "BaseBdev2",
00:14:55.341 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:55.341 "is_configured": true,
00:14:55.341 "data_offset": 0,
00:14:55.341 "data_size": 65536
00:14:55.341 }
00:14:55.341 ]
00:14:55.341 }'
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:55.341 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.880 163.00 IOPS, 489.00 MiB/s [2024-11-05T16:29:08.968Z]
16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:55.880 "name": "raid_bdev1",
00:14:55.880 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:55.880 "strip_size_kb": 0,
00:14:55.880 "state": "online",
00:14:55.880 "raid_level": "raid1",
00:14:55.880 "superblock": false,
00:14:55.880 "num_base_bdevs": 2,
00:14:55.880 "num_base_bdevs_discovered": 1,
00:14:55.880 "num_base_bdevs_operational": 1,
00:14:55.880 "base_bdevs_list": [
00:14:55.880 {
00:14:55.880 "name": null,
00:14:55.880 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:55.880 "is_configured": false,
00:14:55.880 "data_offset": 0,
00:14:55.880 "data_size": 65536
00:14:55.880 },
00:14:55.880 {
00:14:55.880 "name": "BaseBdev2",
00:14:55.880 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:55.880 "is_configured": true,
00:14:55.880 "data_offset": 0,
00:14:55.880 "data_size": 65536
00:14:55.880 }
00:14:55.880 ]
00:14:55.880 }'
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:55.880 [2024-11-05 16:29:08.866623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:55.880 16:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:55.880 [2024-11-05 16:29:08.921216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:14:55.880 [2024-11-05 16:29:08.923218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:56.141 [2024-11-05 16:29:09.037810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:56.141 [2024-11-05 16:29:09.038452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:14:56.141 [2024-11-05 16:29:09.149860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:56.141 [2024-11-05 16:29:09.150191] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:14:56.709 [2024-11-05 16:29:09.507954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:14:56.709 180.67 IOPS, 542.00 MiB/s [2024-11-05T16:29:09.797Z]
[2024-11-05 16:29:09.763881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local
target=spare
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:56.968 "name": "raid_bdev1",
00:14:56.968 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:56.968 "strip_size_kb": 0,
00:14:56.968 "state": "online",
00:14:56.968 "raid_level": "raid1",
00:14:56.968 "superblock": false,
00:14:56.968 "num_base_bdevs": 2,
00:14:56.968 "num_base_bdevs_discovered": 2,
00:14:56.968 "num_base_bdevs_operational": 2,
00:14:56.968 "process": {
00:14:56.968 "type": "rebuild",
00:14:56.968 "target": "spare",
00:14:56.968 "progress": {
00:14:56.968 "blocks": 14336,
00:14:56.968 "percent": 21
00:14:56.968 }
00:14:56.968 },
00:14:56.968 "base_bdevs_list": [
00:14:56.968 {
00:14:56.968 "name": "spare",
00:14:56.968 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a",
00:14:56.968 "is_configured": true,
00:14:56.968 "data_offset": 0,
00:14:56.968 "data_size": 65536
00:14:56.968 },
00:14:56.968 {
00:14:56.968 "name": "BaseBdev2",
00:14:56.968 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:56.968 "is_configured": true,
00:14:56.968 "data_offset": 0,
00:14:56.968 "data_size": 65536
00:14:56.968 }
00:14:56.968 ]
00:14:56.968 }'
00:14:56.968 16:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
[2024-11-05 16:29:09.973860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
[2024-11-05 16:29:09.974321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=422
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
16:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:56.968 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:56.968 16:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:57.227 "name": "raid_bdev1",
00:14:57.227 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:57.227 "strip_size_kb": 0,
00:14:57.227 "state": "online",
00:14:57.227 "raid_level": "raid1",
00:14:57.227 "superblock": false,
00:14:57.227 "num_base_bdevs": 2,
00:14:57.227 "num_base_bdevs_discovered": 2,
00:14:57.227 "num_base_bdevs_operational": 2,
00:14:57.227 "process": {
00:14:57.227 "type": "rebuild",
00:14:57.227 "target": "spare",
00:14:57.227 "progress": {
00:14:57.227 "blocks": 16384,
00:14:57.227 "percent": 25
00:14:57.227 }
00:14:57.227 },
00:14:57.227 "base_bdevs_list": [
00:14:57.227 {
00:14:57.227 "name": "spare",
00:14:57.227 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a",
00:14:57.227 "is_configured": true,
00:14:57.227 "data_offset": 0,
00:14:57.227 "data_size": 65536
00:14:57.227 },
00:14:57.227 {
00:14:57.227 "name": "BaseBdev2",
00:14:57.227 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:57.227 "is_configured": true,
00:14:57.227 "data_offset": 0,
00:14:57.227 "data_size": 65536
00:14:57.227 }
00:14:57.227 ]
00:14:57.227 }'
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:57.227 16:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:57.507 [2024-11-05 16:29:10.402900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
[2024-11-05 16:29:10.403269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:58.074 153.00 IOPS, 459.00 MiB/s [2024-11-05T16:29:11.162Z]
[2024-11-05 16:29:11.045986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
[2024-11-05 16:29:11.162284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:58.333 "name": "raid_bdev1",
00:14:58.333 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:58.333 "strip_size_kb": 0,
00:14:58.333 "state": "online",
00:14:58.333 "raid_level": "raid1",
00:14:58.333 "superblock": false,
00:14:58.333 "num_base_bdevs": 2,
00:14:58.333 "num_base_bdevs_discovered": 2,
00:14:58.333 "num_base_bdevs_operational": 2,
00:14:58.333 "process": {
00:14:58.333 "type": "rebuild",
00:14:58.333 "target": "spare",
00:14:58.333 "progress": {
00:14:58.333 "blocks": 34816,
00:14:58.333 "percent": 53
00:14:58.333 }
00:14:58.333 },
00:14:58.333 "base_bdevs_list": [
00:14:58.333 {
00:14:58.333 "name": "spare",
00:14:58.333 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a",
00:14:58.333 "is_configured": true,
00:14:58.333 "data_offset": 0,
00:14:58.333 "data_size": 65536
00:14:58.333 },
00:14:58.333 {
00:14:58.333 "name": "BaseBdev2",
00:14:58.333 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26",
00:14:58.333 "is_configured": true,
00:14:58.333 "data_offset": 0,
00:14:58.333 "data_size": 65536
00:14:58.333 }
00:14:58.333 ]
00:14:58.333 }'
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:58.333 16:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:58.593 [2024-11-05 16:29:11.505947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
[2024-11-05 16:29:11.506693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:14:58.851 134.20 IOPS, 402.60 MiB/s [2024-11-05T16:29:11.939Z]
[2024-11-05 16:29:11.728800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:59.421 "name": "raid_bdev1",
00:14:59.421 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc",
00:14:59.421 "strip_size_kb": 0,
00:14:59.421 "state": "online",
00:14:59.421 "raid_level": "raid1",
00:14:59.421 "superblock": false,
00:14:59.421 "num_base_bdevs": 2,
00:14:59.421 "num_base_bdevs_discovered": 2,
00:14:59.421 "num_base_bdevs_operational":
2, 00:14:59.421 "process": { 00:14:59.421 "type": "rebuild", 00:14:59.421 "target": "spare", 00:14:59.421 "progress": { 00:14:59.421 "blocks": 49152, 00:14:59.421 "percent": 75 00:14:59.421 } 00:14:59.421 }, 00:14:59.421 "base_bdevs_list": [ 00:14:59.421 { 00:14:59.421 "name": "spare", 00:14:59.421 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a", 00:14:59.421 "is_configured": true, 00:14:59.421 "data_offset": 0, 00:14:59.421 "data_size": 65536 00:14:59.421 }, 00:14:59.421 { 00:14:59.421 "name": "BaseBdev2", 00:14:59.421 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26", 00:14:59.421 "is_configured": true, 00:14:59.421 "data_offset": 0, 00:14:59.421 "data_size": 65536 00:14:59.421 } 00:14:59.421 ] 00:14:59.421 }' 00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.421 16:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.681 117.50 IOPS, 352.50 MiB/s [2024-11-05T16:29:12.769Z] [2024-11-05 16:29:12.739436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:00.251 [2024-11-05 16:29:13.188132] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:00.251 [2024-11-05 16:29:13.288017] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:00.251 [2024-11-05 16:29:13.290391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.510 16:29:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.510 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.511 106.29 IOPS, 318.86 MiB/s [2024-11-05T16:29:13.599Z] 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.511 "name": "raid_bdev1", 00:15:00.511 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc", 00:15:00.511 "strip_size_kb": 0, 00:15:00.511 "state": "online", 00:15:00.511 "raid_level": "raid1", 00:15:00.511 "superblock": false, 00:15:00.511 "num_base_bdevs": 2, 00:15:00.511 "num_base_bdevs_discovered": 2, 00:15:00.511 "num_base_bdevs_operational": 2, 00:15:00.511 "base_bdevs_list": [ 00:15:00.511 { 00:15:00.511 "name": "spare", 00:15:00.511 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a", 00:15:00.511 "is_configured": true, 00:15:00.511 "data_offset": 0, 00:15:00.511 "data_size": 65536 00:15:00.511 }, 00:15:00.511 { 00:15:00.511 "name": "BaseBdev2", 00:15:00.511 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26", 00:15:00.511 "is_configured": 
true, 00:15:00.511 "data_offset": 0, 00:15:00.511 "data_size": 65536 00:15:00.511 } 00:15:00.511 ] 00:15:00.511 }' 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:00.511 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.770 "name": "raid_bdev1", 00:15:00.770 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc", 00:15:00.770 "strip_size_kb": 0, 
00:15:00.770 "state": "online", 00:15:00.770 "raid_level": "raid1", 00:15:00.770 "superblock": false, 00:15:00.770 "num_base_bdevs": 2, 00:15:00.770 "num_base_bdevs_discovered": 2, 00:15:00.770 "num_base_bdevs_operational": 2, 00:15:00.770 "base_bdevs_list": [ 00:15:00.770 { 00:15:00.770 "name": "spare", 00:15:00.770 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a", 00:15:00.770 "is_configured": true, 00:15:00.770 "data_offset": 0, 00:15:00.770 "data_size": 65536 00:15:00.770 }, 00:15:00.770 { 00:15:00.770 "name": "BaseBdev2", 00:15:00.770 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26", 00:15:00.770 "is_configured": true, 00:15:00.770 "data_offset": 0, 00:15:00.770 "data_size": 65536 00:15:00.770 } 00:15:00.770 ] 00:15:00.770 }' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.770 16:29:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.770 "name": "raid_bdev1", 00:15:00.770 "uuid": "dcf183de-91b3-461f-91b7-1646951780cc", 00:15:00.770 "strip_size_kb": 0, 00:15:00.770 "state": "online", 00:15:00.770 "raid_level": "raid1", 00:15:00.770 "superblock": false, 00:15:00.770 "num_base_bdevs": 2, 00:15:00.770 "num_base_bdevs_discovered": 2, 00:15:00.770 "num_base_bdevs_operational": 2, 00:15:00.770 "base_bdevs_list": [ 00:15:00.770 { 00:15:00.770 "name": "spare", 00:15:00.770 "uuid": "8c01edc5-b00c-5ba8-b630-99fa19a7fb3a", 00:15:00.770 "is_configured": true, 00:15:00.770 "data_offset": 0, 00:15:00.770 "data_size": 65536 00:15:00.770 }, 00:15:00.770 { 00:15:00.770 "name": "BaseBdev2", 00:15:00.770 "uuid": "f80a445d-daef-569a-ba0b-23beb4465d26", 00:15:00.770 "is_configured": true, 00:15:00.770 "data_offset": 0, 00:15:00.770 "data_size": 65536 00:15:00.770 } 00:15:00.770 ] 00:15:00.770 }' 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.770 16:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.338 [2024-11-05 16:29:14.237135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.338 [2024-11-05 16:29:14.237233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.338 00:15:01.338 Latency(us) 00:15:01.338 [2024-11-05T16:29:14.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.338 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:01.338 raid_bdev1 : 7.81 97.74 293.21 0.00 0.00 13539.07 332.69 131873.31 00:15:01.338 [2024-11-05T16:29:14.426Z] =================================================================================================================== 00:15:01.338 [2024-11-05T16:29:14.426Z] Total : 97.74 293.21 0.00 0.00 13539.07 332.69 131873.31 00:15:01.338 [2024-11-05 16:29:14.315659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.338 [2024-11-05 16:29:14.315758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.338 [2024-11-05 16:29:14.315867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.338 [2024-11-05 16:29:14.315913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:01.338 { 00:15:01.338 "results": [ 00:15:01.338 { 00:15:01.338 "job": "raid_bdev1", 00:15:01.338 "core_mask": "0x1", 00:15:01.338 "workload": "randrw", 00:15:01.338 "percentage": 50, 00:15:01.338 "status": "finished", 00:15:01.338 "queue_depth": 2, 00:15:01.338 "io_size": 3145728, 00:15:01.338 
"runtime": 7.806693, 00:15:01.338 "iops": 97.736647258961, 00:15:01.338 "mibps": 293.209941776883, 00:15:01.338 "io_failed": 0, 00:15:01.338 "io_timeout": 0, 00:15:01.338 "avg_latency_us": 13539.068997922473, 00:15:01.338 "min_latency_us": 332.6882096069869, 00:15:01.338 "max_latency_us": 131873.3135371179 00:15:01.338 } 00:15:01.338 ], 00:15:01.338 "core_count": 1 00:15:01.338 } 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.338 16:29:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.338 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:01.598 /dev/nbd0 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.598 1+0 records in 00:15:01.598 1+0 records out 00:15:01.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530975 s, 7.7 MB/s 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.598 16:29:14 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:01.598 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.599 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.599 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.599 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.599 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:01.864 /dev/nbd1 
00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.864 1+0 records in 00:15:01.864 1+0 records out 00:15:01.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501663 s, 8.2 MB/s 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.864 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.865 16:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:01.865 16:29:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.865 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.865 16:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.133 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.393 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.652 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.652 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.652 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.652 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.652 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76819 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76819 ']' 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76819 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@957 -- # uname 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76819 00:15:02.653 killing process with pid 76819 00:15:02.653 Received shutdown signal, test time was about 9.085045 seconds 00:15:02.653 00:15:02.653 Latency(us) 00:15:02.653 [2024-11-05T16:29:15.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.653 [2024-11-05T16:29:15.741Z] =================================================================================================================== 00:15:02.653 [2024-11-05T16:29:15.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76819' 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76819 00:15:02.653 [2024-11-05 16:29:15.568844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.653 16:29:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76819 00:15:02.924 [2024-11-05 16:29:15.805349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.305 16:29:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:04.305 00:15:04.305 real 0m12.255s 00:15:04.305 user 0m15.463s 00:15:04.305 sys 0m1.493s 00:15:04.305 16:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:04.305 16:29:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.305 ************************************ 
00:15:04.305 END TEST raid_rebuild_test_io 00:15:04.305 ************************************ 00:15:04.305 16:29:17 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:04.305 16:29:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:04.305 16:29:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:04.305 16:29:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.305 ************************************ 00:15:04.305 START TEST raid_rebuild_test_sb_io 00:15:04.305 ************************************ 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77195 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77195 00:15:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77195 ']' 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:04.305 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.305 [2024-11-05 16:29:17.142813] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:15:04.305 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:04.305 Zero copy mechanism will not be used. 00:15:04.306 [2024-11-05 16:29:17.143032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77195 ] 00:15:04.306 [2024-11-05 16:29:17.317950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.565 [2024-11-05 16:29:17.436850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.565 [2024-11-05 16:29:17.645623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.565 [2024-11-05 16:29:17.645663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.135 16:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:05.135 16:29:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 BaseBdev1_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 [2024-11-05 16:29:18.053915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:05.135 [2024-11-05 16:29:18.054013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.135 [2024-11-05 16:29:18.054038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.135 [2024-11-05 16:29:18.054050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.135 [2024-11-05 16:29:18.056204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.135 [2024-11-05 16:29:18.056245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:05.135 BaseBdev1 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 BaseBdev2_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 [2024-11-05 16:29:18.111699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:05.135 [2024-11-05 16:29:18.111762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.135 [2024-11-05 16:29:18.111782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.135 [2024-11-05 16:29:18.111794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.135 [2024-11-05 16:29:18.113886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.135 [2024-11-05 16:29:18.113926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:05.135 BaseBdev2 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:05.135 spare_malloc 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 spare_delay 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 [2024-11-05 16:29:18.193812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.135 [2024-11-05 16:29:18.193960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.135 [2024-11-05 16:29:18.193990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:05.135 [2024-11-05 16:29:18.194003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.135 [2024-11-05 16:29:18.196566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.135 [2024-11-05 16:29:18.196609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.135 spare 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.135 [2024-11-05 16:29:18.205873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.135 [2024-11-05 16:29:18.207933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.135 [2024-11-05 16:29:18.208118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:05.135 [2024-11-05 16:29:18.208137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:05.135 [2024-11-05 16:29:18.208412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.135 [2024-11-05 16:29:18.208633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:05.135 [2024-11-05 16:29:18.208644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:05.135 [2024-11-05 16:29:18.208840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.135 16:29:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.135 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.394 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.394 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.394 "name": "raid_bdev1", 00:15:05.394 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:05.394 "strip_size_kb": 0, 00:15:05.394 "state": "online", 00:15:05.394 "raid_level": "raid1", 00:15:05.394 "superblock": true, 00:15:05.394 "num_base_bdevs": 2, 00:15:05.394 "num_base_bdevs_discovered": 2, 00:15:05.394 "num_base_bdevs_operational": 2, 00:15:05.394 "base_bdevs_list": [ 00:15:05.394 { 00:15:05.394 "name": "BaseBdev1", 00:15:05.394 "uuid": "24609e44-e4de-5ed4-a11e-497210ab1505", 00:15:05.394 "is_configured": true, 00:15:05.394 "data_offset": 2048, 00:15:05.394 "data_size": 63488 00:15:05.394 }, 00:15:05.394 { 00:15:05.394 "name": "BaseBdev2", 00:15:05.394 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:05.394 "is_configured": true, 00:15:05.394 "data_offset": 2048, 
00:15:05.394 "data_size": 63488 00:15:05.394 } 00:15:05.394 ] 00:15:05.394 }' 00:15:05.394 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.394 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.654 [2024-11-05 16:29:18.693401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.654 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.914 16:29:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 [2024-11-05 16:29:18.776942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.914 16:29:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.914 "name": "raid_bdev1", 00:15:05.914 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:05.914 "strip_size_kb": 0, 00:15:05.914 "state": "online", 00:15:05.914 "raid_level": "raid1", 00:15:05.914 "superblock": true, 00:15:05.914 "num_base_bdevs": 2, 00:15:05.914 "num_base_bdevs_discovered": 1, 00:15:05.914 "num_base_bdevs_operational": 1, 00:15:05.914 "base_bdevs_list": [ 00:15:05.914 { 00:15:05.914 "name": null, 00:15:05.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.914 "is_configured": false, 00:15:05.914 "data_offset": 0, 00:15:05.914 "data_size": 63488 00:15:05.914 }, 00:15:05.914 { 00:15:05.914 "name": "BaseBdev2", 00:15:05.914 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:05.914 "is_configured": true, 00:15:05.914 "data_offset": 2048, 00:15:05.914 "data_size": 63488 00:15:05.914 } 00:15:05.914 ] 00:15:05.914 }' 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.914 16:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.914 [2024-11-05 16:29:18.876773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:05.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:05.914 Zero copy mechanism will not be used. 00:15:05.914 Running I/O for 60 seconds... 
00:15:06.483 16:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.483 16:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.483 16:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.483 [2024-11-05 16:29:19.283167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.483 16:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.483 16:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.483 [2024-11-05 16:29:19.358219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:06.483 [2024-11-05 16:29:19.360413] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.483 [2024-11-05 16:29:19.498876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.742 [2024-11-05 16:29:19.647716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.742 [2024-11-05 16:29:19.648183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:07.002 191.00 IOPS, 573.00 MiB/s [2024-11-05T16:29:20.090Z] [2024-11-05 16:29:20.087226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.261 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.521 "name": "raid_bdev1", 00:15:07.521 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:07.521 "strip_size_kb": 0, 00:15:07.521 "state": "online", 00:15:07.521 "raid_level": "raid1", 00:15:07.521 "superblock": true, 00:15:07.521 "num_base_bdevs": 2, 00:15:07.521 "num_base_bdevs_discovered": 2, 00:15:07.521 "num_base_bdevs_operational": 2, 00:15:07.521 "process": { 00:15:07.521 "type": "rebuild", 00:15:07.521 "target": "spare", 00:15:07.521 "progress": { 00:15:07.521 "blocks": 12288, 00:15:07.521 "percent": 19 00:15:07.521 } 00:15:07.521 }, 00:15:07.521 "base_bdevs_list": [ 00:15:07.521 { 00:15:07.521 "name": "spare", 00:15:07.521 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:07.521 "is_configured": true, 00:15:07.521 "data_offset": 2048, 00:15:07.521 "data_size": 63488 00:15:07.521 }, 00:15:07.521 { 00:15:07.521 "name": "BaseBdev2", 00:15:07.521 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:07.521 "is_configured": true, 00:15:07.521 "data_offset": 2048, 00:15:07.521 "data_size": 63488 00:15:07.521 } 00:15:07.521 ] 00:15:07.521 }' 00:15:07.521 16:29:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.521 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.521 [2024-11-05 16:29:20.494872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.521 [2024-11-05 16:29:20.540854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:07.521 [2024-11-05 16:29:20.541305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:07.781 [2024-11-05 16:29:20.643227] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:07.781 [2024-11-05 16:29:20.646418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.781 [2024-11-05 16:29:20.646540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.781 [2024-11-05 16:29:20.646566] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.781 [2024-11-05 16:29:20.704921] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.781 
16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.781 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.782 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.782 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.782 "name": "raid_bdev1", 00:15:07.782 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:07.782 "strip_size_kb": 0, 00:15:07.782 "state": "online", 00:15:07.782 "raid_level": "raid1", 00:15:07.782 
"superblock": true, 00:15:07.782 "num_base_bdevs": 2, 00:15:07.782 "num_base_bdevs_discovered": 1, 00:15:07.782 "num_base_bdevs_operational": 1, 00:15:07.782 "base_bdevs_list": [ 00:15:07.782 { 00:15:07.782 "name": null, 00:15:07.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.782 "is_configured": false, 00:15:07.782 "data_offset": 0, 00:15:07.782 "data_size": 63488 00:15:07.782 }, 00:15:07.782 { 00:15:07.782 "name": "BaseBdev2", 00:15:07.782 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:07.782 "is_configured": true, 00:15:07.782 "data_offset": 2048, 00:15:07.782 "data_size": 63488 00:15:07.782 } 00:15:07.782 ] 00:15:07.782 }' 00:15:07.782 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.782 16:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.300 137.00 IOPS, 411.00 MiB/s [2024-11-05T16:29:21.388Z] 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.300 "name": "raid_bdev1", 00:15:08.300 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:08.300 "strip_size_kb": 0, 00:15:08.300 "state": "online", 00:15:08.300 "raid_level": "raid1", 00:15:08.300 "superblock": true, 00:15:08.300 "num_base_bdevs": 2, 00:15:08.300 "num_base_bdevs_discovered": 1, 00:15:08.300 "num_base_bdevs_operational": 1, 00:15:08.300 "base_bdevs_list": [ 00:15:08.300 { 00:15:08.300 "name": null, 00:15:08.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.300 "is_configured": false, 00:15:08.300 "data_offset": 0, 00:15:08.300 "data_size": 63488 00:15:08.300 }, 00:15:08.300 { 00:15:08.300 "name": "BaseBdev2", 00:15:08.300 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:08.300 "is_configured": true, 00:15:08.300 "data_offset": 2048, 00:15:08.300 "data_size": 63488 00:15:08.300 } 00:15:08.300 ] 00:15:08.300 }' 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.300 [2024-11-05 16:29:21.313993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.300 16:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:08.300 [2024-11-05 16:29:21.369983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:08.300 [2024-11-05 16:29:21.372192] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.560 [2024-11-05 16:29:21.488939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:08.560 [2024-11-05 16:29:21.489736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:08.560 [2024-11-05 16:29:21.606973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:08.560 [2024-11-05 16:29:21.607434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:08.819 149.67 IOPS, 449.00 MiB/s [2024-11-05T16:29:21.907Z] [2024-11-05 16:29:21.886498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:09.078 [2024-11-05 16:29:22.111895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:09.078 [2024-11-05 16:29:22.112230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.337 16:29:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.337 "name": "raid_bdev1", 00:15:09.337 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:09.337 "strip_size_kb": 0, 00:15:09.337 "state": "online", 00:15:09.337 "raid_level": "raid1", 00:15:09.337 "superblock": true, 00:15:09.337 "num_base_bdevs": 2, 00:15:09.337 "num_base_bdevs_discovered": 2, 00:15:09.337 "num_base_bdevs_operational": 2, 00:15:09.337 "process": { 00:15:09.337 "type": "rebuild", 00:15:09.337 "target": "spare", 00:15:09.337 "progress": { 00:15:09.337 "blocks": 14336, 00:15:09.337 "percent": 22 00:15:09.337 } 00:15:09.337 }, 00:15:09.337 "base_bdevs_list": [ 00:15:09.337 { 00:15:09.337 "name": "spare", 00:15:09.337 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:09.337 "is_configured": true, 00:15:09.337 "data_offset": 2048, 00:15:09.337 "data_size": 63488 00:15:09.337 }, 00:15:09.337 { 00:15:09.337 "name": "BaseBdev2", 00:15:09.337 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:09.337 "is_configured": true, 00:15:09.337 "data_offset": 2048, 00:15:09.337 "data_size": 63488 00:15:09.337 } 00:15:09.337 ] 00:15:09.337 }' 00:15:09.337 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:09.596 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=434 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.596 16:29:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.596 "name": "raid_bdev1", 00:15:09.596 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:09.596 "strip_size_kb": 0, 00:15:09.596 "state": "online", 00:15:09.596 "raid_level": "raid1", 00:15:09.596 "superblock": true, 00:15:09.596 "num_base_bdevs": 2, 00:15:09.596 "num_base_bdevs_discovered": 2, 00:15:09.596 "num_base_bdevs_operational": 2, 00:15:09.596 "process": { 00:15:09.596 "type": "rebuild", 00:15:09.596 "target": "spare", 00:15:09.596 "progress": { 00:15:09.596 "blocks": 16384, 00:15:09.596 "percent": 25 00:15:09.596 } 00:15:09.596 }, 00:15:09.596 "base_bdevs_list": [ 00:15:09.596 { 00:15:09.596 "name": "spare", 00:15:09.596 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:09.596 "is_configured": true, 00:15:09.596 "data_offset": 2048, 00:15:09.596 "data_size": 63488 00:15:09.596 }, 00:15:09.596 { 00:15:09.596 "name": "BaseBdev2", 00:15:09.596 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:09.596 "is_configured": true, 00:15:09.596 "data_offset": 2048, 00:15:09.596 "data_size": 63488 00:15:09.596 } 00:15:09.596 ] 00:15:09.596 }' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.596 16:29:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.596 16:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.864 [2024-11-05 16:29:22.707810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:09.864 [2024-11-05 16:29:22.834722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:10.147 134.25 IOPS, 402.75 MiB/s [2024-11-05T16:29:23.235Z] [2024-11-05 16:29:23.161890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:10.147 [2024-11-05 16:29:23.162272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.717 "name": "raid_bdev1", 00:15:10.717 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:10.717 "strip_size_kb": 0, 00:15:10.717 "state": "online", 00:15:10.717 "raid_level": "raid1", 00:15:10.717 "superblock": true, 00:15:10.717 "num_base_bdevs": 2, 00:15:10.717 "num_base_bdevs_discovered": 2, 00:15:10.717 "num_base_bdevs_operational": 2, 00:15:10.717 "process": { 00:15:10.717 "type": "rebuild", 00:15:10.717 "target": "spare", 00:15:10.717 "progress": { 00:15:10.717 "blocks": 36864, 00:15:10.717 "percent": 58 00:15:10.717 } 00:15:10.717 }, 00:15:10.717 "base_bdevs_list": [ 00:15:10.717 { 00:15:10.717 "name": "spare", 00:15:10.717 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:10.717 "is_configured": true, 00:15:10.717 "data_offset": 2048, 00:15:10.717 "data_size": 63488 00:15:10.717 }, 00:15:10.717 { 00:15:10.717 "name": "BaseBdev2", 00:15:10.717 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:10.717 "is_configured": true, 00:15:10.717 "data_offset": 2048, 00:15:10.717 "data_size": 63488 00:15:10.717 } 00:15:10.717 ] 00:15:10.717 }' 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.717 [2024-11-05 16:29:23.796939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:10.717 16:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.717 16:29:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.236 116.00 IOPS, 348.00 MiB/s [2024-11-05T16:29:24.324Z] [2024-11-05 16:29:24.241733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:11.495 [2024-11-05 16:29:24.578706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.754 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.014 "name": "raid_bdev1", 00:15:12.014 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:12.014 "strip_size_kb": 0, 00:15:12.014 "state": "online", 00:15:12.014 "raid_level": 
"raid1", 00:15:12.014 "superblock": true, 00:15:12.014 "num_base_bdevs": 2, 00:15:12.014 "num_base_bdevs_discovered": 2, 00:15:12.014 "num_base_bdevs_operational": 2, 00:15:12.014 "process": { 00:15:12.014 "type": "rebuild", 00:15:12.014 "target": "spare", 00:15:12.014 "progress": { 00:15:12.014 "blocks": 53248, 00:15:12.014 "percent": 83 00:15:12.014 } 00:15:12.014 }, 00:15:12.014 "base_bdevs_list": [ 00:15:12.014 { 00:15:12.014 "name": "spare", 00:15:12.014 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:12.014 "is_configured": true, 00:15:12.014 "data_offset": 2048, 00:15:12.014 "data_size": 63488 00:15:12.014 }, 00:15:12.014 { 00:15:12.014 "name": "BaseBdev2", 00:15:12.014 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:12.014 "is_configured": true, 00:15:12.014 "data_offset": 2048, 00:15:12.014 "data_size": 63488 00:15:12.014 } 00:15:12.014 ] 00:15:12.014 }' 00:15:12.014 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.014 102.50 IOPS, 307.50 MiB/s [2024-11-05T16:29:25.103Z] 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.015 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.015 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.015 16:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.015 [2024-11-05 16:29:25.011674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:12.274 [2024-11-05 16:29:25.349736] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:12.534 [2024-11-05 16:29:25.449478] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:12.534 [2024-11-05 16:29:25.452344] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.054 94.57 IOPS, 283.71 MiB/s [2024-11-05T16:29:26.142Z] 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.054 16:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.054 "name": "raid_bdev1", 00:15:13.054 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:13.054 "strip_size_kb": 0, 00:15:13.054 "state": "online", 00:15:13.054 "raid_level": "raid1", 00:15:13.054 "superblock": true, 00:15:13.054 "num_base_bdevs": 2, 00:15:13.054 "num_base_bdevs_discovered": 2, 00:15:13.054 "num_base_bdevs_operational": 2, 00:15:13.054 "base_bdevs_list": [ 00:15:13.054 { 00:15:13.054 "name": "spare", 00:15:13.054 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:13.054 "is_configured": true, 
00:15:13.054 "data_offset": 2048, 00:15:13.054 "data_size": 63488 00:15:13.054 }, 00:15:13.054 { 00:15:13.054 "name": "BaseBdev2", 00:15:13.054 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:13.054 "is_configured": true, 00:15:13.054 "data_offset": 2048, 00:15:13.054 "data_size": 63488 00:15:13.054 } 00:15:13.054 ] 00:15:13.054 }' 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.054 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.055 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.055 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.055 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.055 "name": "raid_bdev1", 00:15:13.055 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:13.055 "strip_size_kb": 0, 00:15:13.055 "state": "online", 00:15:13.055 "raid_level": "raid1", 00:15:13.055 "superblock": true, 00:15:13.055 "num_base_bdevs": 2, 00:15:13.055 "num_base_bdevs_discovered": 2, 00:15:13.055 "num_base_bdevs_operational": 2, 00:15:13.055 "base_bdevs_list": [ 00:15:13.055 { 00:15:13.055 "name": "spare", 00:15:13.055 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:13.055 "is_configured": true, 00:15:13.055 "data_offset": 2048, 00:15:13.055 "data_size": 63488 00:15:13.055 }, 00:15:13.055 { 00:15:13.055 "name": "BaseBdev2", 00:15:13.055 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:13.055 "is_configured": true, 00:15:13.055 "data_offset": 2048, 00:15:13.055 "data_size": 63488 00:15:13.055 } 00:15:13.055 ] 00:15:13.055 }' 00:15:13.055 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.314 16:29:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.314 "name": "raid_bdev1", 00:15:13.314 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:13.314 "strip_size_kb": 0, 00:15:13.314 "state": "online", 00:15:13.314 "raid_level": "raid1", 00:15:13.314 "superblock": true, 00:15:13.314 "num_base_bdevs": 2, 00:15:13.314 "num_base_bdevs_discovered": 2, 00:15:13.314 "num_base_bdevs_operational": 2, 00:15:13.314 "base_bdevs_list": [ 00:15:13.314 { 00:15:13.314 "name": "spare", 00:15:13.314 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:13.314 "is_configured": true, 00:15:13.314 "data_offset": 2048, 00:15:13.314 "data_size": 63488 00:15:13.314 }, 00:15:13.314 { 00:15:13.314 "name": "BaseBdev2", 00:15:13.314 "uuid": 
"90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:13.314 "is_configured": true, 00:15:13.314 "data_offset": 2048, 00:15:13.314 "data_size": 63488 00:15:13.314 } 00:15:13.314 ] 00:15:13.314 }' 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.314 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.883 [2024-11-05 16:29:26.672168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.883 [2024-11-05 16:29:26.672203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.883 00:15:13.883 Latency(us) 00:15:13.883 [2024-11-05T16:29:26.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.883 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:13.883 raid_bdev1 : 7.88 87.35 262.05 0.00 0.00 15901.75 327.32 110352.32 00:15:13.883 [2024-11-05T16:29:26.971Z] =================================================================================================================== 00:15:13.883 [2024-11-05T16:29:26.971Z] Total : 87.35 262.05 0.00 0.00 15901.75 327.32 110352.32 00:15:13.883 [2024-11-05 16:29:26.762224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.883 [2024-11-05 16:29:26.762282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.883 [2024-11-05 16:29:26.762365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.883 [2024-11-05 16:29:26.762381] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:13.883 { 00:15:13.883 "results": [ 00:15:13.883 { 00:15:13.883 "job": "raid_bdev1", 00:15:13.883 "core_mask": "0x1", 00:15:13.883 "workload": "randrw", 00:15:13.883 "percentage": 50, 00:15:13.883 "status": "finished", 00:15:13.883 "queue_depth": 2, 00:15:13.883 "io_size": 3145728, 00:15:13.883 "runtime": 7.876288, 00:15:13.883 "iops": 87.35079265765802, 00:15:13.883 "mibps": 262.05237797297406, 00:15:13.883 "io_failed": 0, 00:15:13.883 "io_timeout": 0, 00:15:13.883 "avg_latency_us": 15901.746643647813, 00:15:13.883 "min_latency_us": 327.32227074235806, 00:15:13.883 "max_latency_us": 110352.32139737991 00:15:13.883 } 00:15:13.883 ], 00:15:13.883 "core_count": 1 00:15:13.883 } 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.883 16:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:14.143 /dev/nbd0 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 
00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.143 1+0 records in 00:15:14.143 1+0 records out 00:15:14.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311034 s, 13.2 MB/s 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.143 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:14.403 /dev/nbd1 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:14.403 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.404 1+0 records in 00:15:14.404 1+0 records out 00:15:14.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554821 s, 7.4 MB/s 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.404 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.663 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.922 16:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd0 /proc/partitions 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.181 [2024-11-05 16:29:28.083707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.181 [2024-11-05 16:29:28.083807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.181 [2024-11-05 16:29:28.083832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:15.181 [2024-11-05 16:29:28.083844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.181 [2024-11-05 16:29:28.086457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.181 [2024-11-05 16:29:28.086614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.181 [2024-11-05 16:29:28.086755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.181 [2024-11-05 
16:29:28.086844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.181 [2024-11-05 16:29:28.087098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.181 spare 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.181 [2024-11-05 16:29:28.187056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:15.181 [2024-11-05 16:29:28.187214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.181 [2024-11-05 16:29:28.187669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:15.181 [2024-11-05 16:29:28.187961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:15.181 [2024-11-05 16:29:28.188022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:15.181 [2024-11-05 16:29:28.188344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.181 "name": "raid_bdev1", 00:15:15.181 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:15.181 "strip_size_kb": 0, 00:15:15.181 "state": "online", 00:15:15.181 "raid_level": "raid1", 00:15:15.181 "superblock": true, 00:15:15.181 "num_base_bdevs": 2, 00:15:15.181 "num_base_bdevs_discovered": 2, 00:15:15.181 "num_base_bdevs_operational": 2, 00:15:15.181 "base_bdevs_list": [ 00:15:15.181 { 00:15:15.181 "name": "spare", 00:15:15.181 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:15.181 "is_configured": true, 00:15:15.181 "data_offset": 2048, 00:15:15.181 "data_size": 63488 00:15:15.181 }, 00:15:15.181 { 
00:15:15.181 "name": "BaseBdev2", 00:15:15.181 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:15.181 "is_configured": true, 00:15:15.181 "data_offset": 2048, 00:15:15.181 "data_size": 63488 00:15:15.181 } 00:15:15.181 ] 00:15:15.181 }' 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.181 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.749 "name": "raid_bdev1", 00:15:15.749 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:15.749 "strip_size_kb": 0, 00:15:15.749 "state": "online", 00:15:15.749 "raid_level": "raid1", 00:15:15.749 "superblock": true, 00:15:15.749 "num_base_bdevs": 2, 00:15:15.749 "num_base_bdevs_discovered": 2, 
00:15:15.749 "num_base_bdevs_operational": 2, 00:15:15.749 "base_bdevs_list": [ 00:15:15.749 { 00:15:15.749 "name": "spare", 00:15:15.749 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:15.749 "is_configured": true, 00:15:15.749 "data_offset": 2048, 00:15:15.749 "data_size": 63488 00:15:15.749 }, 00:15:15.749 { 00:15:15.749 "name": "BaseBdev2", 00:15:15.749 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:15.749 "is_configured": true, 00:15:15.749 "data_offset": 2048, 00:15:15.749 "data_size": 63488 00:15:15.749 } 00:15:15.749 ] 00:15:15.749 }' 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.749 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.008 [2024-11-05 16:29:28.855330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.008 16:29:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.008 "name": "raid_bdev1", 00:15:16.008 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:16.008 "strip_size_kb": 0, 00:15:16.008 "state": "online", 00:15:16.008 "raid_level": "raid1", 00:15:16.008 "superblock": true, 00:15:16.008 "num_base_bdevs": 2, 00:15:16.008 "num_base_bdevs_discovered": 1, 00:15:16.008 "num_base_bdevs_operational": 1, 00:15:16.008 "base_bdevs_list": [ 00:15:16.008 { 00:15:16.008 "name": null, 00:15:16.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.008 "is_configured": false, 00:15:16.008 "data_offset": 0, 00:15:16.008 "data_size": 63488 00:15:16.008 }, 00:15:16.008 { 00:15:16.008 "name": "BaseBdev2", 00:15:16.008 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:16.008 "is_configured": true, 00:15:16.008 "data_offset": 2048, 00:15:16.008 "data_size": 63488 00:15:16.008 } 00:15:16.008 ] 00:15:16.008 }' 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.008 16:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.266 16:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.266 16:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.266 16:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.266 [2024-11-05 16:29:29.350620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.266 [2024-11-05 16:29:29.350868] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.266 [2024-11-05 16:29:29.350886] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:16.266 [2024-11-05 16:29:29.350937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.526 [2024-11-05 16:29:29.370562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:16.526 16:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.526 16:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:16.526 [2024-11-05 16:29:29.372792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.469 "name": "raid_bdev1", 00:15:17.469 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:17.469 "strip_size_kb": 0, 00:15:17.469 "state": "online", 
00:15:17.469 "raid_level": "raid1", 00:15:17.469 "superblock": true, 00:15:17.469 "num_base_bdevs": 2, 00:15:17.469 "num_base_bdevs_discovered": 2, 00:15:17.469 "num_base_bdevs_operational": 2, 00:15:17.469 "process": { 00:15:17.469 "type": "rebuild", 00:15:17.469 "target": "spare", 00:15:17.469 "progress": { 00:15:17.469 "blocks": 20480, 00:15:17.469 "percent": 32 00:15:17.469 } 00:15:17.469 }, 00:15:17.469 "base_bdevs_list": [ 00:15:17.469 { 00:15:17.469 "name": "spare", 00:15:17.469 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:17.469 "is_configured": true, 00:15:17.469 "data_offset": 2048, 00:15:17.469 "data_size": 63488 00:15:17.469 }, 00:15:17.469 { 00:15:17.469 "name": "BaseBdev2", 00:15:17.469 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:17.469 "is_configured": true, 00:15:17.469 "data_offset": 2048, 00:15:17.469 "data_size": 63488 00:15:17.469 } 00:15:17.469 ] 00:15:17.469 }' 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.469 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.469 [2024-11-05 16:29:30.528470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.727 [2024-11-05 16:29:30.579159] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.727 [2024-11-05 
16:29:30.579248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.727 [2024-11-05 16:29:30.579271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.727 [2024-11-05 16:29:30.579281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.727 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.727 "name": "raid_bdev1", 00:15:17.727 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:17.727 "strip_size_kb": 0, 00:15:17.727 "state": "online", 00:15:17.727 "raid_level": "raid1", 00:15:17.727 "superblock": true, 00:15:17.727 "num_base_bdevs": 2, 00:15:17.727 "num_base_bdevs_discovered": 1, 00:15:17.727 "num_base_bdevs_operational": 1, 00:15:17.727 "base_bdevs_list": [ 00:15:17.727 { 00:15:17.727 "name": null, 00:15:17.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.728 "is_configured": false, 00:15:17.728 "data_offset": 0, 00:15:17.728 "data_size": 63488 00:15:17.728 }, 00:15:17.728 { 00:15:17.728 "name": "BaseBdev2", 00:15:17.728 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:17.728 "is_configured": true, 00:15:17.728 "data_offset": 2048, 00:15:17.728 "data_size": 63488 00:15:17.728 } 00:15:17.728 ] 00:15:17.728 }' 00:15:17.728 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.728 16:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.987 16:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.987 16:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.987 16:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 [2024-11-05 16:29:31.079703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.247 [2024-11-05 16:29:31.079844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.247 [2024-11-05 16:29:31.079899] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:18.247 [2024-11-05 16:29:31.079951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.247 [2024-11-05 16:29:31.080567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.247 [2024-11-05 16:29:31.080636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.247 [2024-11-05 16:29:31.080797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:18.247 [2024-11-05 16:29:31.080845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.247 [2024-11-05 16:29:31.080899] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:18.247 [2024-11-05 16:29:31.080953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.247 [2024-11-05 16:29:31.100464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:18.247 spare 00:15:18.247 16:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.247 16:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:18.247 [2024-11-05 16:29:31.102726] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.185 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.185 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.186 "name": "raid_bdev1", 00:15:19.186 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:19.186 "strip_size_kb": 0, 00:15:19.186 "state": "online", 00:15:19.186 "raid_level": "raid1", 00:15:19.186 "superblock": true, 00:15:19.186 "num_base_bdevs": 2, 00:15:19.186 "num_base_bdevs_discovered": 2, 00:15:19.186 "num_base_bdevs_operational": 2, 00:15:19.186 "process": { 00:15:19.186 "type": "rebuild", 00:15:19.186 "target": "spare", 00:15:19.186 "progress": { 00:15:19.186 "blocks": 20480, 00:15:19.186 "percent": 32 00:15:19.186 } 00:15:19.186 }, 00:15:19.186 "base_bdevs_list": [ 00:15:19.186 { 00:15:19.186 "name": "spare", 00:15:19.186 "uuid": "3a2cb34c-b9b7-59f2-adcc-11a66ff0e77f", 00:15:19.186 "is_configured": true, 00:15:19.186 "data_offset": 2048, 00:15:19.186 "data_size": 63488 00:15:19.186 }, 00:15:19.186 { 00:15:19.186 "name": "BaseBdev2", 00:15:19.186 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:19.186 "is_configured": true, 00:15:19.186 "data_offset": 2048, 00:15:19.186 "data_size": 63488 00:15:19.186 } 00:15:19.186 ] 00:15:19.186 }' 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.186 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.186 [2024-11-05 16:29:32.270791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.447 [2024-11-05 16:29:32.309134] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.447 [2024-11-05 16:29:32.309335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.447 [2024-11-05 16:29:32.309356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.447 [2024-11-05 16:29:32.309368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.447 "name": "raid_bdev1", 00:15:19.447 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:19.447 "strip_size_kb": 0, 00:15:19.447 "state": "online", 00:15:19.447 "raid_level": "raid1", 00:15:19.447 "superblock": true, 00:15:19.447 "num_base_bdevs": 2, 00:15:19.447 "num_base_bdevs_discovered": 1, 00:15:19.447 "num_base_bdevs_operational": 1, 00:15:19.447 "base_bdevs_list": [ 00:15:19.447 { 00:15:19.447 "name": null, 00:15:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.447 "is_configured": false, 00:15:19.447 "data_offset": 0, 00:15:19.447 "data_size": 63488 00:15:19.447 }, 00:15:19.447 { 00:15:19.447 "name": "BaseBdev2", 00:15:19.447 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:19.447 "is_configured": true, 00:15:19.447 "data_offset": 2048, 00:15:19.447 "data_size": 63488 00:15:19.447 } 00:15:19.447 ] 00:15:19.447 }' 
00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.447 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.706 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.706 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.706 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.706 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.706 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.966 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.966 "name": "raid_bdev1", 00:15:19.966 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:19.966 "strip_size_kb": 0, 00:15:19.966 "state": "online", 00:15:19.966 "raid_level": "raid1", 00:15:19.966 "superblock": true, 00:15:19.966 "num_base_bdevs": 2, 00:15:19.966 "num_base_bdevs_discovered": 1, 00:15:19.966 "num_base_bdevs_operational": 1, 00:15:19.966 "base_bdevs_list": [ 00:15:19.966 { 00:15:19.966 "name": null, 00:15:19.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.966 "is_configured": false, 00:15:19.966 "data_offset": 0, 
00:15:19.966 "data_size": 63488 00:15:19.966 }, 00:15:19.966 { 00:15:19.966 "name": "BaseBdev2", 00:15:19.966 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:19.966 "is_configured": true, 00:15:19.966 "data_offset": 2048, 00:15:19.966 "data_size": 63488 00:15:19.966 } 00:15:19.967 ] 00:15:19.967 }' 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.967 [2024-11-05 16:29:32.942188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.967 [2024-11-05 16:29:32.942287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.967 [2024-11-05 16:29:32.942312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:19.967 [2024-11-05 16:29:32.942327] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.967 [2024-11-05 16:29:32.942813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.967 [2024-11-05 16:29:32.942849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.967 [2024-11-05 16:29:32.942944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.967 [2024-11-05 16:29:32.942966] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.967 [2024-11-05 16:29:32.942974] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.967 [2024-11-05 16:29:32.942987] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:19.967 BaseBdev1 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.967 16:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.906 16:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.165 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.165 "name": "raid_bdev1", 00:15:21.165 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:21.165 "strip_size_kb": 0, 00:15:21.165 "state": "online", 00:15:21.165 "raid_level": "raid1", 00:15:21.165 "superblock": true, 00:15:21.165 "num_base_bdevs": 2, 00:15:21.166 "num_base_bdevs_discovered": 1, 00:15:21.166 "num_base_bdevs_operational": 1, 00:15:21.166 "base_bdevs_list": [ 00:15:21.166 { 00:15:21.166 "name": null, 00:15:21.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.166 "is_configured": false, 00:15:21.166 "data_offset": 0, 00:15:21.166 "data_size": 63488 00:15:21.166 }, 00:15:21.166 { 00:15:21.166 "name": "BaseBdev2", 00:15:21.166 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:21.166 "is_configured": true, 00:15:21.166 "data_offset": 2048, 00:15:21.166 "data_size": 63488 00:15:21.166 } 00:15:21.166 ] 00:15:21.166 }' 00:15:21.166 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.166 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.425 "name": "raid_bdev1", 00:15:21.425 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:21.425 "strip_size_kb": 0, 00:15:21.425 "state": "online", 00:15:21.425 "raid_level": "raid1", 00:15:21.425 "superblock": true, 00:15:21.425 "num_base_bdevs": 2, 00:15:21.425 "num_base_bdevs_discovered": 1, 00:15:21.425 "num_base_bdevs_operational": 1, 00:15:21.425 "base_bdevs_list": [ 00:15:21.425 { 00:15:21.425 "name": null, 00:15:21.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.425 "is_configured": false, 00:15:21.425 "data_offset": 0, 00:15:21.425 "data_size": 63488 00:15:21.425 }, 00:15:21.425 { 00:15:21.425 "name": "BaseBdev2", 00:15:21.425 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:21.425 "is_configured": true, 
00:15:21.425 "data_offset": 2048, 00:15:21.425 "data_size": 63488 00:15:21.425 } 00:15:21.425 ] 00:15:21.425 }' 00:15:21.425 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.685 [2024-11-05 16:29:34.587834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.685 [2024-11-05 16:29:34.588013] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.685 [2024-11-05 16:29:34.588025] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:21.685 request: 00:15:21.685 { 00:15:21.685 "base_bdev": "BaseBdev1", 00:15:21.685 "raid_bdev": "raid_bdev1", 00:15:21.685 "method": "bdev_raid_add_base_bdev", 00:15:21.685 "req_id": 1 00:15:21.685 } 00:15:21.685 Got JSON-RPC error response 00:15:21.685 response: 00:15:21.685 { 00:15:21.685 "code": -22, 00:15:21.685 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:21.685 } 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:21.685 16:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.624 "name": "raid_bdev1", 00:15:22.624 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:22.624 "strip_size_kb": 0, 00:15:22.624 "state": "online", 00:15:22.624 "raid_level": "raid1", 00:15:22.624 "superblock": true, 00:15:22.624 "num_base_bdevs": 2, 00:15:22.624 "num_base_bdevs_discovered": 1, 00:15:22.624 "num_base_bdevs_operational": 1, 00:15:22.624 "base_bdevs_list": [ 00:15:22.624 { 00:15:22.624 "name": null, 00:15:22.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.624 "is_configured": false, 00:15:22.624 "data_offset": 0, 00:15:22.624 "data_size": 63488 00:15:22.624 }, 00:15:22.624 { 00:15:22.624 "name": "BaseBdev2", 00:15:22.624 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:22.624 "is_configured": true, 00:15:22.624 "data_offset": 2048, 00:15:22.624 "data_size": 63488 00:15:22.624 } 00:15:22.624 ] 00:15:22.624 }' 
00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.624 16:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.192 "name": "raid_bdev1", 00:15:23.192 "uuid": "8e744347-1bfc-473c-818d-581991d78f46", 00:15:23.192 "strip_size_kb": 0, 00:15:23.192 "state": "online", 00:15:23.192 "raid_level": "raid1", 00:15:23.192 "superblock": true, 00:15:23.192 "num_base_bdevs": 2, 00:15:23.192 "num_base_bdevs_discovered": 1, 00:15:23.192 "num_base_bdevs_operational": 1, 00:15:23.192 "base_bdevs_list": [ 00:15:23.192 { 00:15:23.192 "name": null, 00:15:23.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.192 "is_configured": false, 00:15:23.192 "data_offset": 0, 
00:15:23.192 "data_size": 63488 00:15:23.192 }, 00:15:23.192 { 00:15:23.192 "name": "BaseBdev2", 00:15:23.192 "uuid": "90f9d905-c152-55d4-b7a2-52ec389d75ed", 00:15:23.192 "is_configured": true, 00:15:23.192 "data_offset": 2048, 00:15:23.192 "data_size": 63488 00:15:23.192 } 00:15:23.192 ] 00:15:23.192 }' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77195 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77195 ']' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77195 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77195 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:23.192 killing process with pid 77195 00:15:23.192 Received shutdown signal, test time was about 17.403193 seconds 00:15:23.192 00:15:23.192 Latency(us) 00:15:23.192 [2024-11-05T16:29:36.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.192 [2024-11-05T16:29:36.280Z] 
=================================================================================================================== 00:15:23.192 [2024-11-05T16:29:36.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77195' 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77195 00:15:23.192 [2024-11-05 16:29:36.248549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.192 [2024-11-05 16:29:36.248684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.192 [2024-11-05 16:29:36.248745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.192 [2024-11-05 16:29:36.248754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:23.192 16:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77195 00:15:23.451 [2024-11-05 16:29:36.478481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.828 ************************************ 00:15:24.828 END TEST raid_rebuild_test_sb_io 00:15:24.828 ************************************ 00:15:24.828 00:15:24.828 real 0m20.638s 00:15:24.828 user 0m27.054s 00:15:24.828 sys 0m2.261s 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.828 16:29:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:24.828 16:29:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:24.828 16:29:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 
7 -le 1 ']' 00:15:24.828 16:29:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:24.828 16:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.828 ************************************ 00:15:24.828 START TEST raid_rebuild_test 00:15:24.828 ************************************ 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77885 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:24.828 16:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77885 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77885 ']' 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:24.829 16:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.829 [2024-11-05 16:29:37.861778] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:15:24.829 [2024-11-05 16:29:37.861994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77885 ] 00:15:24.829 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:24.829 Zero copy mechanism will not be used. 
00:15:25.089 [2024-11-05 16:29:38.040850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.089 [2024-11-05 16:29:38.161135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.348 [2024-11-05 16:29:38.374864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.348 [2024-11-05 16:29:38.375029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.918 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:25.918 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 BaseBdev1_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 [2024-11-05 16:29:38.770577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.919 [2024-11-05 16:29:38.770644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.919 [2024-11-05 16:29:38.770670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.919 [2024-11-05 16:29:38.770681] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.919 [2024-11-05 16:29:38.772961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.919 [2024-11-05 16:29:38.773002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.919 BaseBdev1 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 BaseBdev2_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 [2024-11-05 16:29:38.828990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:25.919 [2024-11-05 16:29:38.829066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.919 [2024-11-05 16:29:38.829090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.919 [2024-11-05 16:29:38.829102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.919 [2024-11-05 16:29:38.831421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.919 [2024-11-05 16:29:38.831462] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.919 BaseBdev2 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 BaseBdev3_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 [2024-11-05 16:29:38.899167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:25.919 [2024-11-05 16:29:38.899299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.919 [2024-11-05 16:29:38.899347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.919 [2024-11-05 16:29:38.899361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.919 [2024-11-05 16:29:38.901897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.919 [2024-11-05 16:29:38.901945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:25.919 BaseBdev3 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 
16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 BaseBdev4_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.919 [2024-11-05 16:29:38.961934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:25.919 [2024-11-05 16:29:38.962051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.919 [2024-11-05 16:29:38.962080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:25.919 [2024-11-05 16:29:38.962092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.919 [2024-11-05 16:29:38.964595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.919 [2024-11-05 16:29:38.964639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:25.919 BaseBdev4 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:25.919 16:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 spare_malloc 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 spare_delay 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 [2024-11-05 16:29:39.035261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.179 [2024-11-05 16:29:39.035426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.179 [2024-11-05 16:29:39.035460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:26.179 [2024-11-05 16:29:39.035473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.179 [2024-11-05 16:29:39.038081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.179 [2024-11-05 16:29:39.038129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.179 spare 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 [2024-11-05 16:29:39.047309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.179 [2024-11-05 16:29:39.049476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.179 [2024-11-05 16:29:39.049574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.179 [2024-11-05 16:29:39.049655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.179 [2024-11-05 16:29:39.049775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:26.179 [2024-11-05 16:29:39.049788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:26.179 [2024-11-05 16:29:39.050113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.179 [2024-11-05 16:29:39.050314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:26.179 [2024-11-05 16:29:39.050328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:26.179 [2024-11-05 16:29:39.050559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.179 "name": "raid_bdev1", 00:15:26.179 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:26.179 "strip_size_kb": 0, 00:15:26.179 "state": "online", 00:15:26.179 "raid_level": "raid1", 00:15:26.179 "superblock": false, 00:15:26.179 "num_base_bdevs": 4, 00:15:26.179 "num_base_bdevs_discovered": 4, 00:15:26.179 "num_base_bdevs_operational": 4, 00:15:26.179 "base_bdevs_list": [ 00:15:26.179 { 00:15:26.179 "name": "BaseBdev1", 00:15:26.179 "uuid": "c3b8bf3c-5c64-5cd0-8182-ab741d2c9750", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 65536 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 
"name": "BaseBdev2", 00:15:26.179 "uuid": "95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 65536 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev3", 00:15:26.179 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 65536 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev4", 00:15:26.179 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 65536 00:15:26.179 } 00:15:26.179 ] 00:15:26.179 }' 00:15:26.179 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.180 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.439 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.439 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.439 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:26.439 [2024-11-05 16:29:39.502911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.439 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.696 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:26.955 [2024-11-05 16:29:39.798115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:26.955 /dev/nbd0 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.955 
16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.955 1+0 records in 00:15:26.955 1+0 records out 00:15:26.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503758 s, 8.1 MB/s 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:26.955 16:29:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:33.569 65536+0 records in 00:15:33.569 65536+0 records out 00:15:33.569 33554432 bytes (34 MB, 32 MiB) copied, 5.52061 s, 6.1 MB/s 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.569 [2024-11-05 16:29:45.620887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:33.569 16:29:45 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.569 [2024-11-05 16:29:45.633981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.569 16:29:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.569 "name": "raid_bdev1", 00:15:33.569 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:33.569 "strip_size_kb": 0, 00:15:33.569 "state": "online", 00:15:33.569 "raid_level": "raid1", 00:15:33.569 "superblock": false, 00:15:33.569 "num_base_bdevs": 4, 00:15:33.569 "num_base_bdevs_discovered": 3, 00:15:33.569 "num_base_bdevs_operational": 3, 00:15:33.569 "base_bdevs_list": [ 00:15:33.569 { 00:15:33.569 "name": null, 00:15:33.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.569 "is_configured": false, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 65536 00:15:33.569 }, 00:15:33.569 { 00:15:33.569 "name": "BaseBdev2", 00:15:33.569 "uuid": "95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:33.569 "is_configured": true, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 65536 00:15:33.569 }, 00:15:33.569 { 00:15:33.569 "name": "BaseBdev3", 00:15:33.569 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:33.569 "is_configured": true, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 65536 00:15:33.569 }, 00:15:33.569 { 00:15:33.569 "name": "BaseBdev4", 00:15:33.569 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:33.569 "is_configured": true, 00:15:33.569 "data_offset": 0, 00:15:33.569 "data_size": 65536 00:15:33.569 } 00:15:33.569 ] 00:15:33.569 }' 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.569 16:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.569 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.569 16:29:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.569 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.569 [2024-11-05 16:29:46.069282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.569 [2024-11-05 16:29:46.087091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:33.569 16:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.569 16:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:33.569 [2024-11-05 16:29:46.089198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.137 "name": "raid_bdev1", 00:15:34.137 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 
00:15:34.137 "strip_size_kb": 0, 00:15:34.137 "state": "online", 00:15:34.137 "raid_level": "raid1", 00:15:34.137 "superblock": false, 00:15:34.137 "num_base_bdevs": 4, 00:15:34.137 "num_base_bdevs_discovered": 4, 00:15:34.137 "num_base_bdevs_operational": 4, 00:15:34.137 "process": { 00:15:34.137 "type": "rebuild", 00:15:34.137 "target": "spare", 00:15:34.137 "progress": { 00:15:34.137 "blocks": 20480, 00:15:34.137 "percent": 31 00:15:34.137 } 00:15:34.137 }, 00:15:34.137 "base_bdevs_list": [ 00:15:34.137 { 00:15:34.137 "name": "spare", 00:15:34.137 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:34.137 "is_configured": true, 00:15:34.137 "data_offset": 0, 00:15:34.137 "data_size": 65536 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": "BaseBdev2", 00:15:34.137 "uuid": "95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:34.137 "is_configured": true, 00:15:34.137 "data_offset": 0, 00:15:34.137 "data_size": 65536 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": "BaseBdev3", 00:15:34.137 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:34.137 "is_configured": true, 00:15:34.137 "data_offset": 0, 00:15:34.137 "data_size": 65536 00:15:34.137 }, 00:15:34.137 { 00:15:34.137 "name": "BaseBdev4", 00:15:34.137 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:34.137 "is_configured": true, 00:15:34.137 "data_offset": 0, 00:15:34.137 "data_size": 65536 00:15:34.137 } 00:15:34.137 ] 00:15:34.137 }' 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.137 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 [2024-11-05 16:29:47.244226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.397 [2024-11-05 16:29:47.295088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.397 [2024-11-05 16:29:47.295164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.397 [2024-11-05 16:29:47.295182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.397 [2024-11-05 16:29:47.295192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.397 "name": "raid_bdev1", 00:15:34.397 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:34.397 "strip_size_kb": 0, 00:15:34.397 "state": "online", 00:15:34.397 "raid_level": "raid1", 00:15:34.397 "superblock": false, 00:15:34.397 "num_base_bdevs": 4, 00:15:34.397 "num_base_bdevs_discovered": 3, 00:15:34.397 "num_base_bdevs_operational": 3, 00:15:34.397 "base_bdevs_list": [ 00:15:34.397 { 00:15:34.397 "name": null, 00:15:34.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.397 "is_configured": false, 00:15:34.397 "data_offset": 0, 00:15:34.397 "data_size": 65536 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "name": "BaseBdev2", 00:15:34.397 "uuid": "95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:34.397 "is_configured": true, 00:15:34.397 "data_offset": 0, 00:15:34.397 "data_size": 65536 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "name": "BaseBdev3", 00:15:34.397 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:34.397 "is_configured": true, 00:15:34.397 "data_offset": 0, 00:15:34.397 "data_size": 65536 00:15:34.397 }, 00:15:34.397 { 00:15:34.397 "name": "BaseBdev4", 00:15:34.397 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:34.397 "is_configured": true, 00:15:34.397 "data_offset": 0, 00:15:34.397 "data_size": 65536 00:15:34.397 } 00:15:34.397 ] 00:15:34.397 }' 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.397 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.965 "name": "raid_bdev1", 00:15:34.965 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:34.965 "strip_size_kb": 0, 00:15:34.965 "state": "online", 00:15:34.965 "raid_level": "raid1", 00:15:34.965 "superblock": false, 00:15:34.965 "num_base_bdevs": 4, 00:15:34.965 "num_base_bdevs_discovered": 3, 00:15:34.965 "num_base_bdevs_operational": 3, 00:15:34.965 "base_bdevs_list": [ 00:15:34.965 { 00:15:34.965 "name": null, 00:15:34.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.965 "is_configured": false, 00:15:34.965 "data_offset": 0, 00:15:34.965 "data_size": 65536 00:15:34.965 }, 00:15:34.965 { 00:15:34.965 "name": "BaseBdev2", 00:15:34.965 "uuid": 
"95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:34.965 "is_configured": true, 00:15:34.965 "data_offset": 0, 00:15:34.965 "data_size": 65536 00:15:34.965 }, 00:15:34.965 { 00:15:34.965 "name": "BaseBdev3", 00:15:34.965 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:34.965 "is_configured": true, 00:15:34.965 "data_offset": 0, 00:15:34.965 "data_size": 65536 00:15:34.965 }, 00:15:34.965 { 00:15:34.965 "name": "BaseBdev4", 00:15:34.965 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:34.965 "is_configured": true, 00:15:34.965 "data_offset": 0, 00:15:34.965 "data_size": 65536 00:15:34.965 } 00:15:34.965 ] 00:15:34.965 }' 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.965 [2024-11-05 16:29:47.976679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.965 [2024-11-05 16:29:47.993029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.965 16:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:34.965 [2024-11-05 16:29:47.995125] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.342 16:29:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.342 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.342 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.342 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.342 16:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.342 "name": "raid_bdev1", 00:15:36.342 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:36.342 "strip_size_kb": 0, 00:15:36.342 "state": "online", 00:15:36.342 "raid_level": "raid1", 00:15:36.342 "superblock": false, 00:15:36.342 "num_base_bdevs": 4, 00:15:36.342 "num_base_bdevs_discovered": 4, 00:15:36.342 "num_base_bdevs_operational": 4, 00:15:36.342 "process": { 00:15:36.342 "type": "rebuild", 00:15:36.342 "target": "spare", 00:15:36.342 "progress": { 00:15:36.342 "blocks": 20480, 00:15:36.342 "percent": 31 00:15:36.342 } 00:15:36.342 }, 00:15:36.342 "base_bdevs_list": [ 00:15:36.342 { 00:15:36.342 "name": "spare", 00:15:36.342 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:36.342 "is_configured": true, 00:15:36.342 "data_offset": 0, 00:15:36.342 "data_size": 65536 00:15:36.342 }, 00:15:36.342 { 
00:15:36.342 "name": "BaseBdev2", 00:15:36.342 "uuid": "95623df9-3bf1-5edf-ae51-6d6b06344f53", 00:15:36.342 "is_configured": true, 00:15:36.342 "data_offset": 0, 00:15:36.342 "data_size": 65536 00:15:36.342 }, 00:15:36.342 { 00:15:36.342 "name": "BaseBdev3", 00:15:36.342 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:36.342 "is_configured": true, 00:15:36.342 "data_offset": 0, 00:15:36.342 "data_size": 65536 00:15:36.342 }, 00:15:36.342 { 00:15:36.342 "name": "BaseBdev4", 00:15:36.342 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:36.342 "is_configured": true, 00:15:36.342 "data_offset": 0, 00:15:36.342 "data_size": 65536 00:15:36.342 } 00:15:36.342 ] 00:15:36.342 }' 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:36.342 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.343 [2024-11-05 16:29:49.174707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.343 
[2024-11-05 16:29:49.201207] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.343 "name": "raid_bdev1", 00:15:36.343 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:36.343 "strip_size_kb": 0, 00:15:36.343 "state": "online", 00:15:36.343 "raid_level": "raid1", 00:15:36.343 "superblock": false, 00:15:36.343 "num_base_bdevs": 4, 00:15:36.343 "num_base_bdevs_discovered": 3, 00:15:36.343 "num_base_bdevs_operational": 3, 00:15:36.343 "process": { 
00:15:36.343 "type": "rebuild", 00:15:36.343 "target": "spare", 00:15:36.343 "progress": { 00:15:36.343 "blocks": 24576, 00:15:36.343 "percent": 37 00:15:36.343 } 00:15:36.343 }, 00:15:36.343 "base_bdevs_list": [ 00:15:36.343 { 00:15:36.343 "name": "spare", 00:15:36.343 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 00:15:36.343 { 00:15:36.343 "name": null, 00:15:36.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.343 "is_configured": false, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 00:15:36.343 { 00:15:36.343 "name": "BaseBdev3", 00:15:36.343 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 00:15:36.343 { 00:15:36.343 "name": "BaseBdev4", 00:15:36.343 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 } 00:15:36.343 ] 00:15:36.343 }' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.343 "name": "raid_bdev1", 00:15:36.343 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:36.343 "strip_size_kb": 0, 00:15:36.343 "state": "online", 00:15:36.343 "raid_level": "raid1", 00:15:36.343 "superblock": false, 00:15:36.343 "num_base_bdevs": 4, 00:15:36.343 "num_base_bdevs_discovered": 3, 00:15:36.343 "num_base_bdevs_operational": 3, 00:15:36.343 "process": { 00:15:36.343 "type": "rebuild", 00:15:36.343 "target": "spare", 00:15:36.343 "progress": { 00:15:36.343 "blocks": 26624, 00:15:36.343 "percent": 40 00:15:36.343 } 00:15:36.343 }, 00:15:36.343 "base_bdevs_list": [ 00:15:36.343 { 00:15:36.343 "name": "spare", 00:15:36.343 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 00:15:36.343 { 00:15:36.343 "name": null, 00:15:36.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.343 "is_configured": false, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 
00:15:36.343 { 00:15:36.343 "name": "BaseBdev3", 00:15:36.343 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 }, 00:15:36.343 { 00:15:36.343 "name": "BaseBdev4", 00:15:36.343 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:36.343 "is_configured": true, 00:15:36.343 "data_offset": 0, 00:15:36.343 "data_size": 65536 00:15:36.343 } 00:15:36.343 ] 00:15:36.343 }' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.343 16:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.719 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.719 "name": "raid_bdev1", 00:15:37.719 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:37.719 "strip_size_kb": 0, 00:15:37.719 "state": "online", 00:15:37.719 "raid_level": "raid1", 00:15:37.719 "superblock": false, 00:15:37.720 "num_base_bdevs": 4, 00:15:37.720 "num_base_bdevs_discovered": 3, 00:15:37.720 "num_base_bdevs_operational": 3, 00:15:37.720 "process": { 00:15:37.720 "type": "rebuild", 00:15:37.720 "target": "spare", 00:15:37.720 "progress": { 00:15:37.720 "blocks": 49152, 00:15:37.720 "percent": 75 00:15:37.720 } 00:15:37.720 }, 00:15:37.720 "base_bdevs_list": [ 00:15:37.720 { 00:15:37.720 "name": "spare", 00:15:37.720 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:37.720 "is_configured": true, 00:15:37.720 "data_offset": 0, 00:15:37.720 "data_size": 65536 00:15:37.720 }, 00:15:37.720 { 00:15:37.720 "name": null, 00:15:37.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.720 "is_configured": false, 00:15:37.720 "data_offset": 0, 00:15:37.720 "data_size": 65536 00:15:37.720 }, 00:15:37.720 { 00:15:37.720 "name": "BaseBdev3", 00:15:37.720 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:37.720 "is_configured": true, 00:15:37.720 "data_offset": 0, 00:15:37.720 "data_size": 65536 00:15:37.720 }, 00:15:37.720 { 00:15:37.720 "name": "BaseBdev4", 00:15:37.720 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:37.720 "is_configured": true, 00:15:37.720 "data_offset": 0, 00:15:37.720 "data_size": 65536 00:15:37.720 } 00:15:37.720 ] 00:15:37.720 }' 00:15:37.720 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.720 16:29:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.720 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.720 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.720 16:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.286 [2024-11-05 16:29:51.211611] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:38.286 [2024-11-05 16:29:51.211724] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.286 [2024-11-05 16:29:51.211785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.545 16:29:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.545 "name": "raid_bdev1", 00:15:38.545 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:38.545 "strip_size_kb": 0, 00:15:38.545 "state": "online", 00:15:38.545 "raid_level": "raid1", 00:15:38.545 "superblock": false, 00:15:38.545 "num_base_bdevs": 4, 00:15:38.545 "num_base_bdevs_discovered": 3, 00:15:38.545 "num_base_bdevs_operational": 3, 00:15:38.545 "base_bdevs_list": [ 00:15:38.545 { 00:15:38.545 "name": "spare", 00:15:38.545 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:38.545 "is_configured": true, 00:15:38.545 "data_offset": 0, 00:15:38.545 "data_size": 65536 00:15:38.545 }, 00:15:38.545 { 00:15:38.545 "name": null, 00:15:38.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.545 "is_configured": false, 00:15:38.545 "data_offset": 0, 00:15:38.545 "data_size": 65536 00:15:38.545 }, 00:15:38.545 { 00:15:38.545 "name": "BaseBdev3", 00:15:38.545 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:38.546 "is_configured": true, 00:15:38.546 "data_offset": 0, 00:15:38.546 "data_size": 65536 00:15:38.546 }, 00:15:38.546 { 00:15:38.546 "name": "BaseBdev4", 00:15:38.546 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:38.546 "is_configured": true, 00:15:38.546 "data_offset": 0, 00:15:38.546 "data_size": 65536 00:15:38.546 } 00:15:38.546 ] 00:15:38.546 }' 00:15:38.546 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.805 "name": "raid_bdev1", 00:15:38.805 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:38.805 "strip_size_kb": 0, 00:15:38.805 "state": "online", 00:15:38.805 "raid_level": "raid1", 00:15:38.805 "superblock": false, 00:15:38.805 "num_base_bdevs": 4, 00:15:38.805 "num_base_bdevs_discovered": 3, 00:15:38.805 "num_base_bdevs_operational": 3, 00:15:38.805 "base_bdevs_list": [ 00:15:38.805 { 00:15:38.805 "name": "spare", 00:15:38.805 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:38.805 "is_configured": true, 00:15:38.805 "data_offset": 0, 00:15:38.805 "data_size": 65536 00:15:38.805 }, 00:15:38.805 { 00:15:38.805 "name": null, 00:15:38.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.805 "is_configured": false, 00:15:38.805 "data_offset": 0, 00:15:38.805 "data_size": 65536 00:15:38.805 }, 00:15:38.805 { 00:15:38.805 "name": "BaseBdev3", 00:15:38.805 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 
00:15:38.805 "is_configured": true, 00:15:38.805 "data_offset": 0, 00:15:38.805 "data_size": 65536 00:15:38.805 }, 00:15:38.805 { 00:15:38.805 "name": "BaseBdev4", 00:15:38.805 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:38.805 "is_configured": true, 00:15:38.805 "data_offset": 0, 00:15:38.805 "data_size": 65536 00:15:38.805 } 00:15:38.805 ] 00:15:38.805 }' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.805 
16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.805 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.063 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.063 "name": "raid_bdev1", 00:15:39.063 "uuid": "3a4af298-54d8-4741-bc72-94b0f388a964", 00:15:39.063 "strip_size_kb": 0, 00:15:39.063 "state": "online", 00:15:39.063 "raid_level": "raid1", 00:15:39.063 "superblock": false, 00:15:39.063 "num_base_bdevs": 4, 00:15:39.063 "num_base_bdevs_discovered": 3, 00:15:39.063 "num_base_bdevs_operational": 3, 00:15:39.063 "base_bdevs_list": [ 00:15:39.063 { 00:15:39.063 "name": "spare", 00:15:39.063 "uuid": "57463b2f-5f66-5970-8e87-a76c4529897f", 00:15:39.063 "is_configured": true, 00:15:39.063 "data_offset": 0, 00:15:39.063 "data_size": 65536 00:15:39.063 }, 00:15:39.063 { 00:15:39.063 "name": null, 00:15:39.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.063 "is_configured": false, 00:15:39.063 "data_offset": 0, 00:15:39.063 "data_size": 65536 00:15:39.063 }, 00:15:39.063 { 00:15:39.063 "name": "BaseBdev3", 00:15:39.063 "uuid": "71e2d87d-b056-5a0a-9f0a-ab05cb22e23f", 00:15:39.063 "is_configured": true, 00:15:39.063 "data_offset": 0, 00:15:39.063 "data_size": 65536 00:15:39.063 }, 00:15:39.063 { 00:15:39.063 "name": "BaseBdev4", 00:15:39.063 "uuid": "856aca7a-dc76-5fe3-96a4-5ba3750578c6", 00:15:39.063 "is_configured": true, 00:15:39.063 "data_offset": 0, 00:15:39.063 "data_size": 65536 00:15:39.063 } 00:15:39.063 ] 00:15:39.063 }' 00:15:39.063 16:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.063 16:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:39.321 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.321 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.321 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.321 [2024-11-05 16:29:52.324619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.321 [2024-11-05 16:29:52.324716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.321 [2024-11-05 16:29:52.324825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.321 [2024-11-05 16:29:52.324929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.322 [2024-11-05 16:29:52.325007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.322 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:39.590 /dev/nbd0 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:39.591 
16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.591 1+0 records in 00:15:39.591 1+0 records out 00:15:39.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443743 s, 9.2 MB/s 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.591 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:39.852 /dev/nbd1 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:39.852 16:29:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.852 1+0 records in 00:15:39.852 1+0 records out 00:15:39.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036937 s, 11.1 MB/s 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.852 16:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.112 16:29:53 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.112 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.371 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.372 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.372 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.372 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.631 16:29:53 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77885 00:15:40.631 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77885 ']' 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77885 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77885 00:15:40.632 killing process with pid 77885 00:15:40.632 Received shutdown signal, test time was about 60.000000 seconds 00:15:40.632 00:15:40.632 Latency(us) 00:15:40.632 [2024-11-05T16:29:53.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.632 [2024-11-05T16:29:53.720Z] =================================================================================================================== 00:15:40.632 [2024-11-05T16:29:53.720Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77885' 00:15:40.632 16:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77885 00:15:40.632 16:29:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77885 00:15:40.632 [2024-11-05 16:29:53.551161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.285 [2024-11-05 16:29:54.041268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.223 16:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:42.223 ************************************ 00:15:42.223 END TEST raid_rebuild_test 00:15:42.223 ************************************ 00:15:42.223 00:15:42.223 real 0m17.377s 00:15:42.223 user 0m19.693s 00:15:42.223 sys 0m3.037s 00:15:42.223 16:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.223 16:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.223 16:29:55 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:42.223 16:29:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:42.223 16:29:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.223 16:29:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.223 ************************************ 00:15:42.223 START TEST raid_rebuild_test_sb 00:15:42.223 ************************************ 00:15:42.223 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:42.224 16:29:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78330 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78330 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78330 ']' 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.224 16:29:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.483 [2024-11-05 16:29:55.314792] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:15:42.483 [2024-11-05 16:29:55.315010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78330 ] 00:15:42.483 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.483 Zero copy mechanism will not be used. 00:15:42.483 [2024-11-05 16:29:55.495597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.742 [2024-11-05 16:29:55.617242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.742 [2024-11-05 16:29:55.824874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.742 [2024-11-05 16:29:55.824962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.310 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 BaseBdev1_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.311 [2024-11-05 16:29:56.252398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.311 [2024-11-05 16:29:56.252484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.311 [2024-11-05 16:29:56.252512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:43.311 [2024-11-05 16:29:56.252543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.311 [2024-11-05 16:29:56.255003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.311 [2024-11-05 16:29:56.255048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.311 BaseBdev1 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 BaseBdev2_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 [2024-11-05 16:29:56.310799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:43.311 [2024-11-05 
16:29:56.310868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.311 [2024-11-05 16:29:56.310890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.311 [2024-11-05 16:29:56.310902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.311 [2024-11-05 16:29:56.313114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.311 [2024-11-05 16:29:56.313160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:43.311 BaseBdev2 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 BaseBdev3_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 [2024-11-05 16:29:56.378987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:43.311 [2024-11-05 16:29:56.379046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.311 [2024-11-05 16:29:56.379068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:15:43.311 [2024-11-05 16:29:56.379080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.311 [2024-11-05 16:29:56.381297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.311 [2024-11-05 16:29:56.381345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:43.311 BaseBdev3 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.311 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 BaseBdev4_malloc 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 [2024-11-05 16:29:56.436399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:43.571 [2024-11-05 16:29:56.436544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.571 [2024-11-05 16:29:56.436569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:43.571 [2024-11-05 16:29:56.436579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.571 [2024-11-05 16:29:56.438654] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.571 [2024-11-05 16:29:56.438703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:43.571 BaseBdev4 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 spare_malloc 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 spare_delay 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 [2024-11-05 16:29:56.504198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:43.571 [2024-11-05 16:29:56.504290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.571 [2024-11-05 16:29:56.504315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:15:43.571 [2024-11-05 16:29:56.504325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.571 [2024-11-05 16:29:56.506573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.571 [2024-11-05 16:29:56.506613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:43.571 spare 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.571 [2024-11-05 16:29:56.516238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.571 [2024-11-05 16:29:56.518085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.571 [2024-11-05 16:29:56.518210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.571 [2024-11-05 16:29:56.518271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:43.571 [2024-11-05 16:29:56.518463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:43.571 [2024-11-05 16:29:56.518481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:43.571 [2024-11-05 16:29:56.518768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:43.571 [2024-11-05 16:29:56.518960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:43.571 [2024-11-05 16:29:56.518971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:15:43.571 [2024-11-05 16:29:56.519167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.571 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.572 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.572 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:43.572 "name": "raid_bdev1", 00:15:43.572 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:43.572 "strip_size_kb": 0, 00:15:43.572 "state": "online", 00:15:43.572 "raid_level": "raid1", 00:15:43.572 "superblock": true, 00:15:43.572 "num_base_bdevs": 4, 00:15:43.572 "num_base_bdevs_discovered": 4, 00:15:43.572 "num_base_bdevs_operational": 4, 00:15:43.572 "base_bdevs_list": [ 00:15:43.572 { 00:15:43.572 "name": "BaseBdev1", 00:15:43.572 "uuid": "163bc7d3-166a-5f52-bc37-46031d59e7cf", 00:15:43.572 "is_configured": true, 00:15:43.572 "data_offset": 2048, 00:15:43.572 "data_size": 63488 00:15:43.572 }, 00:15:43.572 { 00:15:43.572 "name": "BaseBdev2", 00:15:43.572 "uuid": "f953a673-525f-5724-b174-66f2965992c9", 00:15:43.572 "is_configured": true, 00:15:43.572 "data_offset": 2048, 00:15:43.572 "data_size": 63488 00:15:43.572 }, 00:15:43.572 { 00:15:43.572 "name": "BaseBdev3", 00:15:43.572 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:43.572 "is_configured": true, 00:15:43.572 "data_offset": 2048, 00:15:43.572 "data_size": 63488 00:15:43.572 }, 00:15:43.572 { 00:15:43.572 "name": "BaseBdev4", 00:15:43.572 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:43.572 "is_configured": true, 00:15:43.572 "data_offset": 2048, 00:15:43.572 "data_size": 63488 00:15:43.572 } 00:15:43.572 ] 00:15:43.572 }' 00:15:43.572 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.572 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.140 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:44.140 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.140 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.140 
[2024-11-05 16:29:56.955895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.141 16:29:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.141 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:44.401 [2024-11-05 16:29:57.251032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:44.401 /dev/nbd0 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.401 1+0 records in 00:15:44.401 1+0 records out 00:15:44.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361442 s, 11.3 MB/s 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:44.401 16:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:51.024 63488+0 records in 00:15:51.024 63488+0 records out 00:15:51.024 32505856 bytes (33 MB, 31 MiB) copied, 5.57181 s, 5.8 MB/s 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.024 16:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.024 [2024-11-05 16:30:03.109161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.024 [2024-11-05 16:30:03.141378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.024 "name": "raid_bdev1", 00:15:51.024 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:51.024 "strip_size_kb": 0, 00:15:51.024 "state": "online", 00:15:51.024 "raid_level": "raid1", 00:15:51.024 "superblock": true, 00:15:51.024 "num_base_bdevs": 4, 00:15:51.024 "num_base_bdevs_discovered": 3, 00:15:51.024 "num_base_bdevs_operational": 3, 00:15:51.024 "base_bdevs_list": [ 00:15:51.024 { 00:15:51.024 "name": null, 00:15:51.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.024 "is_configured": false, 00:15:51.024 "data_offset": 0, 00:15:51.024 "data_size": 63488 00:15:51.024 }, 00:15:51.024 { 00:15:51.024 "name": "BaseBdev2", 00:15:51.024 "uuid": 
"f953a673-525f-5724-b174-66f2965992c9", 00:15:51.024 "is_configured": true, 00:15:51.024 "data_offset": 2048, 00:15:51.024 "data_size": 63488 00:15:51.024 }, 00:15:51.024 { 00:15:51.024 "name": "BaseBdev3", 00:15:51.024 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:51.024 "is_configured": true, 00:15:51.024 "data_offset": 2048, 00:15:51.024 "data_size": 63488 00:15:51.024 }, 00:15:51.024 { 00:15:51.024 "name": "BaseBdev4", 00:15:51.024 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:51.024 "is_configured": true, 00:15:51.024 "data_offset": 2048, 00:15:51.024 "data_size": 63488 00:15:51.024 } 00:15:51.024 ] 00:15:51.024 }' 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.024 [2024-11-05 16:30:03.568708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.024 [2024-11-05 16:30:03.586074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.024 16:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:51.024 [2024-11-05 16:30:03.588260] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.599 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.599 "name": "raid_bdev1", 00:15:51.599 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:51.599 "strip_size_kb": 0, 00:15:51.599 "state": "online", 00:15:51.599 "raid_level": "raid1", 00:15:51.599 "superblock": true, 00:15:51.599 "num_base_bdevs": 4, 00:15:51.599 "num_base_bdevs_discovered": 4, 00:15:51.599 "num_base_bdevs_operational": 4, 00:15:51.599 "process": { 00:15:51.599 "type": "rebuild", 00:15:51.599 "target": "spare", 00:15:51.599 "progress": { 00:15:51.599 "blocks": 20480, 00:15:51.599 "percent": 32 00:15:51.599 } 00:15:51.599 }, 00:15:51.599 "base_bdevs_list": [ 00:15:51.599 { 00:15:51.599 "name": "spare", 00:15:51.599 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:51.599 "is_configured": true, 00:15:51.599 "data_offset": 2048, 00:15:51.599 "data_size": 63488 00:15:51.599 }, 00:15:51.599 { 00:15:51.599 "name": "BaseBdev2", 00:15:51.599 "uuid": "f953a673-525f-5724-b174-66f2965992c9", 00:15:51.600 "is_configured": true, 00:15:51.600 "data_offset": 2048, 
00:15:51.600 "data_size": 63488 00:15:51.600 }, 00:15:51.600 { 00:15:51.600 "name": "BaseBdev3", 00:15:51.600 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:51.600 "is_configured": true, 00:15:51.600 "data_offset": 2048, 00:15:51.600 "data_size": 63488 00:15:51.600 }, 00:15:51.600 { 00:15:51.600 "name": "BaseBdev4", 00:15:51.600 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:51.600 "is_configured": true, 00:15:51.600 "data_offset": 2048, 00:15:51.600 "data_size": 63488 00:15:51.600 } 00:15:51.600 ] 00:15:51.600 }' 00:15:51.600 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.859 [2024-11-05 16:30:04.747208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.859 [2024-11-05 16:30:04.794216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.859 [2024-11-05 16:30:04.794362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.859 [2024-11-05 16:30:04.794384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.859 [2024-11-05 16:30:04.794394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.859 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.859 "name": "raid_bdev1", 00:15:51.859 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:51.859 "strip_size_kb": 0, 00:15:51.859 "state": "online", 00:15:51.859 "raid_level": "raid1", 
00:15:51.859 "superblock": true, 00:15:51.859 "num_base_bdevs": 4, 00:15:51.859 "num_base_bdevs_discovered": 3, 00:15:51.859 "num_base_bdevs_operational": 3, 00:15:51.859 "base_bdevs_list": [ 00:15:51.859 { 00:15:51.859 "name": null, 00:15:51.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.859 "is_configured": false, 00:15:51.860 "data_offset": 0, 00:15:51.860 "data_size": 63488 00:15:51.860 }, 00:15:51.860 { 00:15:51.860 "name": "BaseBdev2", 00:15:51.860 "uuid": "f953a673-525f-5724-b174-66f2965992c9", 00:15:51.860 "is_configured": true, 00:15:51.860 "data_offset": 2048, 00:15:51.860 "data_size": 63488 00:15:51.860 }, 00:15:51.860 { 00:15:51.860 "name": "BaseBdev3", 00:15:51.860 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:51.860 "is_configured": true, 00:15:51.860 "data_offset": 2048, 00:15:51.860 "data_size": 63488 00:15:51.860 }, 00:15:51.860 { 00:15:51.860 "name": "BaseBdev4", 00:15:51.860 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:51.860 "is_configured": true, 00:15:51.860 "data_offset": 2048, 00:15:51.860 "data_size": 63488 00:15:51.860 } 00:15:51.860 ] 00:15:51.860 }' 00:15:51.860 16:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.860 16:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.429 "name": "raid_bdev1", 00:15:52.429 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:52.429 "strip_size_kb": 0, 00:15:52.429 "state": "online", 00:15:52.429 "raid_level": "raid1", 00:15:52.429 "superblock": true, 00:15:52.429 "num_base_bdevs": 4, 00:15:52.429 "num_base_bdevs_discovered": 3, 00:15:52.429 "num_base_bdevs_operational": 3, 00:15:52.429 "base_bdevs_list": [ 00:15:52.429 { 00:15:52.429 "name": null, 00:15:52.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.429 "is_configured": false, 00:15:52.429 "data_offset": 0, 00:15:52.429 "data_size": 63488 00:15:52.429 }, 00:15:52.429 { 00:15:52.429 "name": "BaseBdev2", 00:15:52.429 "uuid": "f953a673-525f-5724-b174-66f2965992c9", 00:15:52.429 "is_configured": true, 00:15:52.429 "data_offset": 2048, 00:15:52.429 "data_size": 63488 00:15:52.429 }, 00:15:52.429 { 00:15:52.429 "name": "BaseBdev3", 00:15:52.429 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:52.429 "is_configured": true, 00:15:52.429 "data_offset": 2048, 00:15:52.429 "data_size": 63488 00:15:52.429 }, 00:15:52.429 { 00:15:52.429 "name": "BaseBdev4", 00:15:52.429 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:52.429 "is_configured": true, 00:15:52.429 "data_offset": 2048, 00:15:52.429 "data_size": 63488 00:15:52.429 } 00:15:52.429 ] 00:15:52.429 }' 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.429 16:30:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.429 [2024-11-05 16:30:05.444757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.429 [2024-11-05 16:30:05.460270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.429 16:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:52.429 [2024-11-05 16:30:05.462694] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.809 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.809 "name": "raid_bdev1", 00:15:53.810 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:53.810 "strip_size_kb": 0, 00:15:53.810 "state": "online", 00:15:53.810 "raid_level": "raid1", 00:15:53.810 "superblock": true, 00:15:53.810 "num_base_bdevs": 4, 00:15:53.810 "num_base_bdevs_discovered": 4, 00:15:53.810 "num_base_bdevs_operational": 4, 00:15:53.810 "process": { 00:15:53.810 "type": "rebuild", 00:15:53.810 "target": "spare", 00:15:53.810 "progress": { 00:15:53.810 "blocks": 20480, 00:15:53.810 "percent": 32 00:15:53.810 } 00:15:53.810 }, 00:15:53.810 "base_bdevs_list": [ 00:15:53.810 { 00:15:53.810 "name": "spare", 00:15:53.810 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": "BaseBdev2", 00:15:53.810 "uuid": "f953a673-525f-5724-b174-66f2965992c9", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": "BaseBdev3", 00:15:53.810 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": "BaseBdev4", 00:15:53.810 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 } 00:15:53.810 ] 00:15:53.810 }' 00:15:53.810 16:30:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:53.810 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.810 [2024-11-05 16:30:06.585513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.810 [2024-11-05 16:30:06.773336] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:53.810 16:30:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.810 "name": "raid_bdev1", 00:15:53.810 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:53.810 "strip_size_kb": 0, 00:15:53.810 "state": "online", 00:15:53.810 "raid_level": "raid1", 00:15:53.810 "superblock": true, 00:15:53.810 "num_base_bdevs": 4, 00:15:53.810 "num_base_bdevs_discovered": 3, 00:15:53.810 "num_base_bdevs_operational": 3, 00:15:53.810 "process": { 00:15:53.810 "type": "rebuild", 00:15:53.810 "target": "spare", 00:15:53.810 "progress": { 00:15:53.810 "blocks": 24576, 00:15:53.810 "percent": 38 00:15:53.810 } 00:15:53.810 }, 00:15:53.810 "base_bdevs_list": [ 00:15:53.810 { 00:15:53.810 "name": "spare", 00:15:53.810 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 
00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": null, 00:15:53.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.810 "is_configured": false, 00:15:53.810 "data_offset": 0, 00:15:53.810 "data_size": 63488 00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": "BaseBdev3", 00:15:53.810 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 }, 00:15:53.810 { 00:15:53.810 "name": "BaseBdev4", 00:15:53.810 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:53.810 "is_configured": true, 00:15:53.810 "data_offset": 2048, 00:15:53.810 "data_size": 63488 00:15:53.810 } 00:15:53.810 ] 00:15:53.810 }' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.810 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=478 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.070 16:30:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.070 "name": "raid_bdev1", 00:15:54.070 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:54.070 "strip_size_kb": 0, 00:15:54.070 "state": "online", 00:15:54.070 "raid_level": "raid1", 00:15:54.070 "superblock": true, 00:15:54.070 "num_base_bdevs": 4, 00:15:54.070 "num_base_bdevs_discovered": 3, 00:15:54.070 "num_base_bdevs_operational": 3, 00:15:54.070 "process": { 00:15:54.070 "type": "rebuild", 00:15:54.070 "target": "spare", 00:15:54.070 "progress": { 00:15:54.070 "blocks": 26624, 00:15:54.070 "percent": 41 00:15:54.070 } 00:15:54.070 }, 00:15:54.070 "base_bdevs_list": [ 00:15:54.070 { 00:15:54.070 "name": "spare", 00:15:54.070 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 2048, 00:15:54.070 "data_size": 63488 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": null, 00:15:54.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.070 "is_configured": false, 00:15:54.070 "data_offset": 0, 00:15:54.070 "data_size": 63488 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": "BaseBdev3", 00:15:54.070 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 2048, 00:15:54.070 "data_size": 63488 00:15:54.070 }, 00:15:54.070 { 00:15:54.070 "name": "BaseBdev4", 00:15:54.070 "uuid": 
"309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:54.070 "is_configured": true, 00:15:54.070 "data_offset": 2048, 00:15:54.070 "data_size": 63488 00:15:54.070 } 00:15:54.070 ] 00:15:54.070 }' 00:15:54.070 16:30:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.070 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.070 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.070 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.070 16:30:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.007 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.279 16:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.279 16:30:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.279 "name": "raid_bdev1", 00:15:55.279 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:55.279 "strip_size_kb": 0, 00:15:55.279 "state": "online", 00:15:55.280 "raid_level": "raid1", 00:15:55.280 "superblock": true, 00:15:55.280 "num_base_bdevs": 4, 00:15:55.280 "num_base_bdevs_discovered": 3, 00:15:55.280 "num_base_bdevs_operational": 3, 00:15:55.280 "process": { 00:15:55.280 "type": "rebuild", 00:15:55.280 "target": "spare", 00:15:55.280 "progress": { 00:15:55.280 "blocks": 51200, 00:15:55.280 "percent": 80 00:15:55.280 } 00:15:55.280 }, 00:15:55.280 "base_bdevs_list": [ 00:15:55.280 { 00:15:55.280 "name": "spare", 00:15:55.280 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:55.280 "is_configured": true, 00:15:55.280 "data_offset": 2048, 00:15:55.280 "data_size": 63488 00:15:55.280 }, 00:15:55.280 { 00:15:55.280 "name": null, 00:15:55.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.280 "is_configured": false, 00:15:55.280 "data_offset": 0, 00:15:55.280 "data_size": 63488 00:15:55.280 }, 00:15:55.280 { 00:15:55.280 "name": "BaseBdev3", 00:15:55.280 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:55.280 "is_configured": true, 00:15:55.280 "data_offset": 2048, 00:15:55.280 "data_size": 63488 00:15:55.280 }, 00:15:55.280 { 00:15:55.280 "name": "BaseBdev4", 00:15:55.280 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:55.280 "is_configured": true, 00:15:55.280 "data_offset": 2048, 00:15:55.280 "data_size": 63488 00:15:55.280 } 00:15:55.280 ] 00:15:55.280 }' 00:15:55.280 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.280 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.280 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.280 16:30:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.280 16:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.872 [2024-11-05 16:30:08.690259] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:55.872 [2024-11-05 16:30:08.690506] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:55.872 [2024-11-05 16:30:08.690745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.131 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.391 "name": "raid_bdev1", 00:15:56.391 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:56.391 
"strip_size_kb": 0, 00:15:56.391 "state": "online", 00:15:56.391 "raid_level": "raid1", 00:15:56.391 "superblock": true, 00:15:56.391 "num_base_bdevs": 4, 00:15:56.391 "num_base_bdevs_discovered": 3, 00:15:56.391 "num_base_bdevs_operational": 3, 00:15:56.391 "base_bdevs_list": [ 00:15:56.391 { 00:15:56.391 "name": "spare", 00:15:56.391 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:56.391 "is_configured": true, 00:15:56.391 "data_offset": 2048, 00:15:56.391 "data_size": 63488 00:15:56.391 }, 00:15:56.391 { 00:15:56.391 "name": null, 00:15:56.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.391 "is_configured": false, 00:15:56.391 "data_offset": 0, 00:15:56.391 "data_size": 63488 00:15:56.391 }, 00:15:56.391 { 00:15:56.391 "name": "BaseBdev3", 00:15:56.391 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:56.391 "is_configured": true, 00:15:56.391 "data_offset": 2048, 00:15:56.391 "data_size": 63488 00:15:56.391 }, 00:15:56.391 { 00:15:56.391 "name": "BaseBdev4", 00:15:56.391 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:56.391 "is_configured": true, 00:15:56.391 "data_offset": 2048, 00:15:56.391 "data_size": 63488 00:15:56.391 } 00:15:56.391 ] 00:15:56.391 }' 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.391 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.391 "name": "raid_bdev1", 00:15:56.391 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:56.391 "strip_size_kb": 0, 00:15:56.391 "state": "online", 00:15:56.391 "raid_level": "raid1", 00:15:56.391 "superblock": true, 00:15:56.391 "num_base_bdevs": 4, 00:15:56.391 "num_base_bdevs_discovered": 3, 00:15:56.391 "num_base_bdevs_operational": 3, 00:15:56.391 "base_bdevs_list": [ 00:15:56.391 { 00:15:56.391 "name": "spare", 00:15:56.391 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:56.391 "is_configured": true, 00:15:56.391 "data_offset": 2048, 00:15:56.391 "data_size": 63488 00:15:56.391 }, 00:15:56.391 { 00:15:56.391 "name": null, 00:15:56.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.391 "is_configured": false, 00:15:56.392 "data_offset": 0, 00:15:56.392 "data_size": 63488 00:15:56.392 }, 00:15:56.392 { 00:15:56.392 "name": "BaseBdev3", 00:15:56.392 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:56.392 "is_configured": true, 00:15:56.392 "data_offset": 2048, 00:15:56.392 "data_size": 
63488 00:15:56.392 }, 00:15:56.392 { 00:15:56.392 "name": "BaseBdev4", 00:15:56.392 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:56.392 "is_configured": true, 00:15:56.392 "data_offset": 2048, 00:15:56.392 "data_size": 63488 00:15:56.392 } 00:15:56.392 ] 00:15:56.392 }' 00:15:56.392 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.392 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.392 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.651 "name": "raid_bdev1", 00:15:56.651 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:56.651 "strip_size_kb": 0, 00:15:56.651 "state": "online", 00:15:56.651 "raid_level": "raid1", 00:15:56.651 "superblock": true, 00:15:56.651 "num_base_bdevs": 4, 00:15:56.651 "num_base_bdevs_discovered": 3, 00:15:56.651 "num_base_bdevs_operational": 3, 00:15:56.651 "base_bdevs_list": [ 00:15:56.651 { 00:15:56.651 "name": "spare", 00:15:56.651 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:56.651 "is_configured": true, 00:15:56.651 "data_offset": 2048, 00:15:56.651 "data_size": 63488 00:15:56.651 }, 00:15:56.651 { 00:15:56.651 "name": null, 00:15:56.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.651 "is_configured": false, 00:15:56.651 "data_offset": 0, 00:15:56.651 "data_size": 63488 00:15:56.651 }, 00:15:56.651 { 00:15:56.651 "name": "BaseBdev3", 00:15:56.651 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:56.651 "is_configured": true, 00:15:56.651 "data_offset": 2048, 00:15:56.651 "data_size": 63488 00:15:56.651 }, 00:15:56.651 { 00:15:56.651 "name": "BaseBdev4", 00:15:56.651 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:56.651 "is_configured": true, 00:15:56.651 "data_offset": 2048, 00:15:56.651 "data_size": 63488 00:15:56.651 } 00:15:56.651 ] 00:15:56.651 }' 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.651 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.910 [2024-11-05 16:30:09.951250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.910 [2024-11-05 16:30:09.951319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.910 [2024-11-05 16:30:09.951457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.910 [2024-11-05 16:30:09.951594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.910 [2024-11-05 16:30:09.951614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.910 16:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:57.170 /dev/nbd0 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:57.170 16:30:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.170 1+0 records in 00:15:57.170 1+0 records out 00:15:57.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533511 s, 7.7 MB/s 00:15:57.170 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:57.428 /dev/nbd1 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i = 1 )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.428 1+0 records in 00:15:57.428 1+0 records out 00:15:57.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407007 s, 10.1 MB/s 00:15:57.428 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.688 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.947 16:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:58.206 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.206 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.206 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:15:58.206 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.206 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.207 [2024-11-05 16:30:11.208449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.207 [2024-11-05 16:30:11.208577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.207 [2024-11-05 16:30:11.208611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:58.207 [2024-11-05 16:30:11.208623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.207 [2024-11-05 16:30:11.211580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.207 [2024-11-05 
16:30:11.211622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.207 [2024-11-05 16:30:11.211751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:58.207 [2024-11-05 16:30:11.211810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.207 [2024-11-05 16:30:11.211970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.207 [2024-11-05 16:30:11.212082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.207 spare 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.207 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.465 [2024-11-05 16:30:11.312017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:58.465 [2024-11-05 16:30:11.312073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:58.465 [2024-11-05 16:30:11.312591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:58.465 [2024-11-05 16:30:11.312866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:58.465 [2024-11-05 16:30:11.312886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:58.465 [2024-11-05 16:30:11.313151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.465 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.465 "name": "raid_bdev1", 00:15:58.465 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:58.465 "strip_size_kb": 0, 00:15:58.465 "state": "online", 00:15:58.465 "raid_level": "raid1", 00:15:58.465 "superblock": true, 00:15:58.465 "num_base_bdevs": 4, 00:15:58.465 "num_base_bdevs_discovered": 3, 00:15:58.465 
"num_base_bdevs_operational": 3, 00:15:58.465 "base_bdevs_list": [ 00:15:58.465 { 00:15:58.465 "name": "spare", 00:15:58.465 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:58.465 "is_configured": true, 00:15:58.465 "data_offset": 2048, 00:15:58.465 "data_size": 63488 00:15:58.465 }, 00:15:58.465 { 00:15:58.465 "name": null, 00:15:58.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.465 "is_configured": false, 00:15:58.465 "data_offset": 2048, 00:15:58.466 "data_size": 63488 00:15:58.466 }, 00:15:58.466 { 00:15:58.466 "name": "BaseBdev3", 00:15:58.466 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:58.466 "is_configured": true, 00:15:58.466 "data_offset": 2048, 00:15:58.466 "data_size": 63488 00:15:58.466 }, 00:15:58.466 { 00:15:58.466 "name": "BaseBdev4", 00:15:58.466 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:58.466 "is_configured": true, 00:15:58.466 "data_offset": 2048, 00:15:58.466 "data_size": 63488 00:15:58.466 } 00:15:58.466 ] 00:15:58.466 }' 00:15:58.466 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.466 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.033 "name": "raid_bdev1", 00:15:59.033 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:59.033 "strip_size_kb": 0, 00:15:59.033 "state": "online", 00:15:59.033 "raid_level": "raid1", 00:15:59.033 "superblock": true, 00:15:59.033 "num_base_bdevs": 4, 00:15:59.033 "num_base_bdevs_discovered": 3, 00:15:59.033 "num_base_bdevs_operational": 3, 00:15:59.033 "base_bdevs_list": [ 00:15:59.033 { 00:15:59.033 "name": "spare", 00:15:59.033 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:15:59.033 "is_configured": true, 00:15:59.033 "data_offset": 2048, 00:15:59.033 "data_size": 63488 00:15:59.033 }, 00:15:59.033 { 00:15:59.033 "name": null, 00:15:59.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.033 "is_configured": false, 00:15:59.033 "data_offset": 2048, 00:15:59.033 "data_size": 63488 00:15:59.033 }, 00:15:59.033 { 00:15:59.033 "name": "BaseBdev3", 00:15:59.033 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:59.033 "is_configured": true, 00:15:59.033 "data_offset": 2048, 00:15:59.033 "data_size": 63488 00:15:59.033 }, 00:15:59.033 { 00:15:59.033 "name": "BaseBdev4", 00:15:59.033 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:59.033 "is_configured": true, 00:15:59.033 "data_offset": 2048, 00:15:59.033 "data_size": 63488 00:15:59.033 } 00:15:59.033 ] 00:15:59.033 }' 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.033 16:30:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:59.033 16:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.033 [2024-11-05 16:30:12.019985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.033 16:30:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.033 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.033 "name": "raid_bdev1", 00:15:59.033 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:15:59.033 "strip_size_kb": 0, 00:15:59.033 "state": "online", 00:15:59.033 "raid_level": "raid1", 00:15:59.033 "superblock": true, 00:15:59.033 "num_base_bdevs": 4, 00:15:59.033 "num_base_bdevs_discovered": 2, 00:15:59.033 "num_base_bdevs_operational": 2, 00:15:59.033 "base_bdevs_list": [ 00:15:59.033 { 00:15:59.033 "name": null, 00:15:59.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.034 "is_configured": false, 00:15:59.034 "data_offset": 0, 00:15:59.034 "data_size": 63488 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "name": null, 00:15:59.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.034 "is_configured": false, 00:15:59.034 "data_offset": 2048, 00:15:59.034 "data_size": 63488 00:15:59.034 }, 
00:15:59.034 { 00:15:59.034 "name": "BaseBdev3", 00:15:59.034 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:15:59.034 "is_configured": true, 00:15:59.034 "data_offset": 2048, 00:15:59.034 "data_size": 63488 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "name": "BaseBdev4", 00:15:59.034 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:15:59.034 "is_configured": true, 00:15:59.034 "data_offset": 2048, 00:15:59.034 "data_size": 63488 00:15:59.034 } 00:15:59.034 ] 00:15:59.034 }' 00:15:59.034 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.034 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.632 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.632 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.632 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.632 [2024-11-05 16:30:12.479220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.632 [2024-11-05 16:30:12.479615] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:59.632 [2024-11-05 16:30:12.479681] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:59.632 [2024-11-05 16:30:12.479754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.632 [2024-11-05 16:30:12.494943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:59.632 16:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.632 16:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:59.632 [2024-11-05 16:30:12.497358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.570 "name": "raid_bdev1", 00:16:00.570 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:00.570 "strip_size_kb": 0, 00:16:00.570 "state": "online", 00:16:00.570 "raid_level": "raid1", 
00:16:00.570 "superblock": true, 00:16:00.570 "num_base_bdevs": 4, 00:16:00.570 "num_base_bdevs_discovered": 3, 00:16:00.570 "num_base_bdevs_operational": 3, 00:16:00.570 "process": { 00:16:00.570 "type": "rebuild", 00:16:00.570 "target": "spare", 00:16:00.570 "progress": { 00:16:00.570 "blocks": 20480, 00:16:00.570 "percent": 32 00:16:00.570 } 00:16:00.570 }, 00:16:00.570 "base_bdevs_list": [ 00:16:00.570 { 00:16:00.570 "name": "spare", 00:16:00.570 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:16:00.570 "is_configured": true, 00:16:00.570 "data_offset": 2048, 00:16:00.570 "data_size": 63488 00:16:00.570 }, 00:16:00.570 { 00:16:00.570 "name": null, 00:16:00.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.570 "is_configured": false, 00:16:00.570 "data_offset": 2048, 00:16:00.570 "data_size": 63488 00:16:00.570 }, 00:16:00.570 { 00:16:00.570 "name": "BaseBdev3", 00:16:00.570 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:00.570 "is_configured": true, 00:16:00.570 "data_offset": 2048, 00:16:00.570 "data_size": 63488 00:16:00.570 }, 00:16:00.570 { 00:16:00.570 "name": "BaseBdev4", 00:16:00.570 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:00.570 "is_configured": true, 00:16:00.570 "data_offset": 2048, 00:16:00.570 "data_size": 63488 00:16:00.570 } 00:16:00.570 ] 00:16:00.570 }' 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:00.570 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.570 [2024-11-05 16:30:13.653011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.830 [2024-11-05 16:30:13.707744] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:00.830 [2024-11-05 16:30:13.707971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.830 [2024-11-05 16:30:13.708025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.830 [2024-11-05 16:30:13.708051] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.830 "name": "raid_bdev1", 00:16:00.830 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:00.830 "strip_size_kb": 0, 00:16:00.830 "state": "online", 00:16:00.830 "raid_level": "raid1", 00:16:00.830 "superblock": true, 00:16:00.830 "num_base_bdevs": 4, 00:16:00.830 "num_base_bdevs_discovered": 2, 00:16:00.830 "num_base_bdevs_operational": 2, 00:16:00.830 "base_bdevs_list": [ 00:16:00.830 { 00:16:00.830 "name": null, 00:16:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.830 "is_configured": false, 00:16:00.830 "data_offset": 0, 00:16:00.830 "data_size": 63488 00:16:00.830 }, 00:16:00.830 { 00:16:00.830 "name": null, 00:16:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.830 "is_configured": false, 00:16:00.830 "data_offset": 2048, 00:16:00.830 "data_size": 63488 00:16:00.830 }, 00:16:00.830 { 00:16:00.830 "name": "BaseBdev3", 00:16:00.830 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:00.830 "is_configured": true, 00:16:00.830 "data_offset": 2048, 00:16:00.830 "data_size": 63488 00:16:00.830 }, 00:16:00.830 { 00:16:00.830 "name": "BaseBdev4", 00:16:00.830 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:00.830 "is_configured": true, 00:16:00.830 "data_offset": 2048, 00:16:00.830 "data_size": 63488 00:16:00.830 } 00:16:00.830 ] 00:16:00.830 }' 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:00.830 16:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.399 16:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.399 16:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.399 16:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.399 [2024-11-05 16:30:14.216186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.399 [2024-11-05 16:30:14.216315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.399 [2024-11-05 16:30:14.216358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:01.399 [2024-11-05 16:30:14.216371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.399 [2024-11-05 16:30:14.217078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.399 [2024-11-05 16:30:14.217117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.399 [2024-11-05 16:30:14.217252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:01.399 [2024-11-05 16:30:14.217271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:01.399 [2024-11-05 16:30:14.217295] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.399 [2024-11-05 16:30:14.217339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.399 [2024-11-05 16:30:14.235854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:01.399 spare 00:16:01.399 16:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.399 16:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:01.399 [2024-11-05 16:30:14.238567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.338 "name": "raid_bdev1", 00:16:02.338 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:02.338 "strip_size_kb": 0, 00:16:02.338 "state": "online", 00:16:02.338 
"raid_level": "raid1", 00:16:02.338 "superblock": true, 00:16:02.338 "num_base_bdevs": 4, 00:16:02.338 "num_base_bdevs_discovered": 3, 00:16:02.338 "num_base_bdevs_operational": 3, 00:16:02.338 "process": { 00:16:02.338 "type": "rebuild", 00:16:02.338 "target": "spare", 00:16:02.338 "progress": { 00:16:02.338 "blocks": 20480, 00:16:02.338 "percent": 32 00:16:02.338 } 00:16:02.338 }, 00:16:02.338 "base_bdevs_list": [ 00:16:02.338 { 00:16:02.338 "name": "spare", 00:16:02.338 "uuid": "35f893be-4842-5144-a58e-417267562f61", 00:16:02.338 "is_configured": true, 00:16:02.338 "data_offset": 2048, 00:16:02.338 "data_size": 63488 00:16:02.338 }, 00:16:02.338 { 00:16:02.338 "name": null, 00:16:02.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.338 "is_configured": false, 00:16:02.338 "data_offset": 2048, 00:16:02.338 "data_size": 63488 00:16:02.338 }, 00:16:02.338 { 00:16:02.338 "name": "BaseBdev3", 00:16:02.338 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:02.338 "is_configured": true, 00:16:02.338 "data_offset": 2048, 00:16:02.338 "data_size": 63488 00:16:02.338 }, 00:16:02.338 { 00:16:02.338 "name": "BaseBdev4", 00:16:02.338 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:02.338 "is_configured": true, 00:16:02.338 "data_offset": 2048, 00:16:02.338 "data_size": 63488 00:16:02.338 } 00:16:02.338 ] 00:16:02.338 }' 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.338 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.338 [2024-11-05 16:30:15.389632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.599 [2024-11-05 16:30:15.447565] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.599 [2024-11-05 16:30:15.447732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.599 [2024-11-05 16:30:15.447769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.599 [2024-11-05 16:30:15.447780] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.599 
16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.599 "name": "raid_bdev1", 00:16:02.599 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:02.599 "strip_size_kb": 0, 00:16:02.599 "state": "online", 00:16:02.599 "raid_level": "raid1", 00:16:02.599 "superblock": true, 00:16:02.599 "num_base_bdevs": 4, 00:16:02.599 "num_base_bdevs_discovered": 2, 00:16:02.599 "num_base_bdevs_operational": 2, 00:16:02.599 "base_bdevs_list": [ 00:16:02.599 { 00:16:02.599 "name": null, 00:16:02.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.599 "is_configured": false, 00:16:02.599 "data_offset": 0, 00:16:02.599 "data_size": 63488 00:16:02.599 }, 00:16:02.599 { 00:16:02.599 "name": null, 00:16:02.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.599 "is_configured": false, 00:16:02.599 "data_offset": 2048, 00:16:02.599 "data_size": 63488 00:16:02.599 }, 00:16:02.599 { 00:16:02.599 "name": "BaseBdev3", 00:16:02.599 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:02.599 "is_configured": true, 00:16:02.599 "data_offset": 2048, 00:16:02.599 "data_size": 63488 00:16:02.599 }, 00:16:02.599 { 00:16:02.599 "name": "BaseBdev4", 00:16:02.599 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:02.599 "is_configured": true, 00:16:02.599 "data_offset": 2048, 00:16:02.599 "data_size": 63488 00:16:02.599 } 00:16:02.599 ] 00:16:02.599 }' 00:16:02.599 16:30:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.599 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.168 "name": "raid_bdev1", 00:16:03.168 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:03.168 "strip_size_kb": 0, 00:16:03.168 "state": "online", 00:16:03.168 "raid_level": "raid1", 00:16:03.168 "superblock": true, 00:16:03.168 "num_base_bdevs": 4, 00:16:03.168 "num_base_bdevs_discovered": 2, 00:16:03.168 "num_base_bdevs_operational": 2, 00:16:03.168 "base_bdevs_list": [ 00:16:03.168 { 00:16:03.168 "name": null, 00:16:03.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.168 "is_configured": false, 00:16:03.168 "data_offset": 0, 00:16:03.168 "data_size": 63488 00:16:03.168 }, 00:16:03.168 
{ 00:16:03.168 "name": null, 00:16:03.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.168 "is_configured": false, 00:16:03.168 "data_offset": 2048, 00:16:03.168 "data_size": 63488 00:16:03.168 }, 00:16:03.168 { 00:16:03.168 "name": "BaseBdev3", 00:16:03.168 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:03.168 "is_configured": true, 00:16:03.168 "data_offset": 2048, 00:16:03.168 "data_size": 63488 00:16:03.168 }, 00:16:03.168 { 00:16:03.168 "name": "BaseBdev4", 00:16:03.168 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:03.168 "is_configured": true, 00:16:03.168 "data_offset": 2048, 00:16:03.168 "data_size": 63488 00:16:03.168 } 00:16:03.168 ] 00:16:03.168 }' 00:16:03.168 16:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.168 [2024-11-05 16:30:16.109237] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.168 [2024-11-05 16:30:16.109401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.168 [2024-11-05 16:30:16.109440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:03.168 [2024-11-05 16:30:16.109456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.168 [2024-11-05 16:30:16.110089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.168 [2024-11-05 16:30:16.110114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.168 [2024-11-05 16:30:16.110215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:03.168 [2024-11-05 16:30:16.110236] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:03.168 [2024-11-05 16:30:16.110247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:03.168 [2024-11-05 16:30:16.110275] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:03.168 BaseBdev1 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.168 16:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.104 16:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.104 "name": "raid_bdev1", 00:16:04.104 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:04.104 "strip_size_kb": 0, 00:16:04.104 "state": "online", 00:16:04.104 "raid_level": "raid1", 00:16:04.104 "superblock": true, 00:16:04.104 "num_base_bdevs": 4, 00:16:04.104 "num_base_bdevs_discovered": 2, 00:16:04.104 "num_base_bdevs_operational": 2, 00:16:04.104 "base_bdevs_list": [ 00:16:04.104 { 00:16:04.104 "name": null, 00:16:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.104 "is_configured": false, 00:16:04.104 "data_offset": 0, 00:16:04.104 "data_size": 63488 00:16:04.104 }, 00:16:04.104 { 00:16:04.104 "name": null, 00:16:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.104 
"is_configured": false, 00:16:04.104 "data_offset": 2048, 00:16:04.104 "data_size": 63488 00:16:04.104 }, 00:16:04.104 { 00:16:04.104 "name": "BaseBdev3", 00:16:04.104 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:04.104 "is_configured": true, 00:16:04.104 "data_offset": 2048, 00:16:04.104 "data_size": 63488 00:16:04.104 }, 00:16:04.104 { 00:16:04.104 "name": "BaseBdev4", 00:16:04.104 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:04.104 "is_configured": true, 00:16:04.104 "data_offset": 2048, 00:16:04.104 "data_size": 63488 00:16:04.104 } 00:16:04.104 ] 00:16:04.104 }' 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.104 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:04.670 "name": "raid_bdev1", 00:16:04.670 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:04.670 "strip_size_kb": 0, 00:16:04.670 "state": "online", 00:16:04.670 "raid_level": "raid1", 00:16:04.670 "superblock": true, 00:16:04.670 "num_base_bdevs": 4, 00:16:04.670 "num_base_bdevs_discovered": 2, 00:16:04.670 "num_base_bdevs_operational": 2, 00:16:04.670 "base_bdevs_list": [ 00:16:04.670 { 00:16:04.670 "name": null, 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 0, 00:16:04.670 "data_size": 63488 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": null, 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 2048, 00:16:04.670 "data_size": 63488 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": "BaseBdev3", 00:16:04.670 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:04.670 "is_configured": true, 00:16:04.670 "data_offset": 2048, 00:16:04.670 "data_size": 63488 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": "BaseBdev4", 00:16:04.670 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:04.670 "is_configured": true, 00:16:04.670 "data_offset": 2048, 00:16:04.670 "data_size": 63488 00:16:04.670 } 00:16:04.670 ] 00:16:04.670 }' 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 [2024-11-05 16:30:17.698754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.670 [2024-11-05 16:30:17.699029] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:04.670 [2024-11-05 16:30:17.699094] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.670 request: 00:16:04.670 { 00:16:04.670 "base_bdev": "BaseBdev1", 00:16:04.670 "raid_bdev": "raid_bdev1", 00:16:04.670 "method": "bdev_raid_add_base_bdev", 00:16:04.670 "req_id": 1 00:16:04.670 } 00:16:04.670 Got JSON-RPC error response 00:16:04.670 response: 00:16:04.670 { 00:16:04.670 "code": -22, 00:16:04.670 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:04.670 } 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.670 16:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.045 "name": "raid_bdev1", 00:16:06.045 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:06.045 "strip_size_kb": 0, 00:16:06.045 "state": "online", 00:16:06.045 "raid_level": "raid1", 00:16:06.045 "superblock": true, 00:16:06.045 "num_base_bdevs": 4, 00:16:06.045 "num_base_bdevs_discovered": 2, 00:16:06.045 "num_base_bdevs_operational": 2, 00:16:06.045 "base_bdevs_list": [ 00:16:06.045 { 00:16:06.045 "name": null, 00:16:06.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.045 "is_configured": false, 00:16:06.045 "data_offset": 0, 00:16:06.045 "data_size": 63488 00:16:06.045 }, 00:16:06.045 { 00:16:06.045 "name": null, 00:16:06.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.045 "is_configured": false, 00:16:06.045 "data_offset": 2048, 00:16:06.045 "data_size": 63488 00:16:06.045 }, 00:16:06.045 { 00:16:06.045 "name": "BaseBdev3", 00:16:06.045 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:06.045 "is_configured": true, 00:16:06.045 "data_offset": 2048, 00:16:06.045 "data_size": 63488 00:16:06.045 }, 00:16:06.045 { 00:16:06.045 "name": "BaseBdev4", 00:16:06.045 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:06.045 "is_configured": true, 00:16:06.045 "data_offset": 2048, 00:16:06.045 "data_size": 63488 00:16:06.045 } 00:16:06.045 ] 00:16:06.045 }' 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.045 16:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.304 16:30:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.304 "name": "raid_bdev1", 00:16:06.304 "uuid": "ed62ba88-ac4c-4b49-866e-ea5160b958e8", 00:16:06.304 "strip_size_kb": 0, 00:16:06.304 "state": "online", 00:16:06.304 "raid_level": "raid1", 00:16:06.304 "superblock": true, 00:16:06.304 "num_base_bdevs": 4, 00:16:06.304 "num_base_bdevs_discovered": 2, 00:16:06.304 "num_base_bdevs_operational": 2, 00:16:06.304 "base_bdevs_list": [ 00:16:06.304 { 00:16:06.304 "name": null, 00:16:06.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.304 "is_configured": false, 00:16:06.304 "data_offset": 0, 00:16:06.304 "data_size": 63488 00:16:06.304 }, 00:16:06.304 { 00:16:06.304 "name": null, 00:16:06.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.304 "is_configured": false, 00:16:06.304 "data_offset": 2048, 00:16:06.304 "data_size": 63488 00:16:06.304 }, 00:16:06.304 { 00:16:06.304 "name": "BaseBdev3", 00:16:06.304 "uuid": "035cff8b-c0f7-52e5-8ea8-42f1a75d18ff", 00:16:06.304 "is_configured": true, 00:16:06.304 "data_offset": 2048, 00:16:06.304 "data_size": 63488 00:16:06.304 }, 
00:16:06.304 { 00:16:06.304 "name": "BaseBdev4", 00:16:06.304 "uuid": "309b1281-3dfa-5993-97e2-ab48f74b5830", 00:16:06.304 "is_configured": true, 00:16:06.304 "data_offset": 2048, 00:16:06.304 "data_size": 63488 00:16:06.304 } 00:16:06.304 ] 00:16:06.304 }' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78330 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78330 ']' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78330 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78330 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:06.304 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:06.305 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78330' 00:16:06.305 killing process with pid 78330 00:16:06.305 Received shutdown signal, test time was about 60.000000 seconds 00:16:06.305 00:16:06.305 Latency(us) 00:16:06.305 [2024-11-05T16:30:19.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.305 
[2024-11-05T16:30:19.393Z] =================================================================================================================== 00:16:06.305 [2024-11-05T16:30:19.393Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.305 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78330 00:16:06.305 [2024-11-05 16:30:19.362543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.305 [2024-11-05 16:30:19.362689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.305 16:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78330 00:16:06.305 [2024-11-05 16:30:19.362770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.305 [2024-11-05 16:30:19.362793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:06.872 [2024-11-05 16:30:19.895819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:08.248 00:16:08.248 real 0m25.832s 00:16:08.248 user 0m31.062s 00:16:08.248 sys 0m3.907s 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:08.248 ************************************ 00:16:08.248 END TEST raid_rebuild_test_sb 00:16:08.248 ************************************ 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.248 16:30:21 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:08.248 16:30:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:08.248 16:30:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.248 16:30:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:08.248 ************************************ 00:16:08.248 START TEST raid_rebuild_test_io 00:16:08.248 ************************************ 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79089 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79089 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79089 ']' 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:08.248 16:30:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.248 [2024-11-05 16:30:21.219039] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:16:08.248 [2024-11-05 16:30:21.219249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.248 Zero copy mechanism will not be used. 00:16:08.248 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79089 ] 00:16:08.507 [2024-11-05 16:30:21.394227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.507 [2024-11-05 16:30:21.509049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.767 [2024-11-05 16:30:21.716433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.767 [2024-11-05 16:30:21.716596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.026 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 BaseBdev1_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 [2024-11-05 16:30:22.131815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.287 [2024-11-05 16:30:22.131884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.287 [2024-11-05 16:30:22.131908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.287 [2024-11-05 16:30:22.131919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.287 [2024-11-05 16:30:22.134175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.287 [2024-11-05 16:30:22.134218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.287 BaseBdev1 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:16:09.287 BaseBdev2_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 [2024-11-05 16:30:22.188107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.287 [2024-11-05 16:30:22.188192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.287 [2024-11-05 16:30:22.188213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.287 [2024-11-05 16:30:22.188224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.287 [2024-11-05 16:30:22.190719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.287 [2024-11-05 16:30:22.190805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.287 BaseBdev2 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 BaseBdev3_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 [2024-11-05 16:30:22.258446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:09.287 [2024-11-05 16:30:22.258558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.287 [2024-11-05 16:30:22.258585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:09.287 [2024-11-05 16:30:22.258596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.287 [2024-11-05 16:30:22.260712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.287 [2024-11-05 16:30:22.260754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.287 BaseBdev3 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 BaseBdev4_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 [2024-11-05 16:30:22.314980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:09.287 [2024-11-05 16:30:22.315055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.287 [2024-11-05 16:30:22.315082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:09.287 [2024-11-05 16:30:22.315093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.287 [2024-11-05 16:30:22.317302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.287 [2024-11-05 16:30:22.317351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:09.287 BaseBdev4 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.287 spare_malloc 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.287 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.548 spare_delay 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.548 [2024-11-05 16:30:22.386437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.548 [2024-11-05 16:30:22.386528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.548 [2024-11-05 16:30:22.386555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.548 [2024-11-05 16:30:22.386567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.548 [2024-11-05 16:30:22.388860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.548 [2024-11-05 16:30:22.388911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.548 spare 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.548 [2024-11-05 16:30:22.398468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.548 [2024-11-05 16:30:22.400392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.548 [2024-11-05 16:30:22.400495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.548 [2024-11-05 16:30:22.400571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:09.548 [2024-11-05 16:30:22.400672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:09.548 [2024-11-05 16:30:22.400687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:09.548 [2024-11-05 16:30:22.401009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:09.548 [2024-11-05 16:30:22.401202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:09.548 [2024-11-05 16:30:22.401216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:09.548 [2024-11-05 16:30:22.401410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.548 "name": "raid_bdev1", 00:16:09.548 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:09.548 "strip_size_kb": 0, 00:16:09.548 "state": "online", 00:16:09.548 "raid_level": "raid1", 00:16:09.548 "superblock": false, 00:16:09.548 "num_base_bdevs": 4, 00:16:09.548 "num_base_bdevs_discovered": 4, 00:16:09.548 "num_base_bdevs_operational": 4, 00:16:09.548 "base_bdevs_list": [ 00:16:09.548 { 00:16:09.548 "name": "BaseBdev1", 00:16:09.548 "uuid": "b10a5bed-b3d7-5688-9941-72c4332a045a", 00:16:09.548 "is_configured": true, 00:16:09.548 "data_offset": 0, 00:16:09.548 "data_size": 65536 00:16:09.548 }, 00:16:09.548 { 00:16:09.548 "name": "BaseBdev2", 00:16:09.548 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:09.548 "is_configured": true, 00:16:09.548 "data_offset": 0, 00:16:09.548 "data_size": 65536 00:16:09.548 }, 00:16:09.548 { 00:16:09.548 "name": "BaseBdev3", 00:16:09.548 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:09.548 "is_configured": true, 00:16:09.548 "data_offset": 0, 00:16:09.548 "data_size": 65536 00:16:09.548 }, 00:16:09.548 { 00:16:09.548 "name": "BaseBdev4", 00:16:09.548 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:09.548 "is_configured": true, 00:16:09.548 "data_offset": 0, 00:16:09.548 "data_size": 65536 00:16:09.548 } 00:16:09.548 ] 00:16:09.548 }' 00:16:09.548 
16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.548 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.808 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:09.808 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.808 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.808 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.808 [2024-11-05 16:30:22.885994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.066 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:10.067 16:30:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.067 [2024-11-05 16:30:22.969470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.067 16:30:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.067 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.067 "name": "raid_bdev1", 00:16:10.067 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:10.067 "strip_size_kb": 0, 00:16:10.067 "state": "online", 00:16:10.067 "raid_level": "raid1", 00:16:10.067 "superblock": false, 00:16:10.067 "num_base_bdevs": 4, 00:16:10.067 "num_base_bdevs_discovered": 3, 00:16:10.067 "num_base_bdevs_operational": 3, 00:16:10.067 "base_bdevs_list": [ 00:16:10.067 { 00:16:10.067 "name": null, 00:16:10.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.067 "is_configured": false, 00:16:10.067 "data_offset": 0, 00:16:10.067 "data_size": 65536 00:16:10.067 }, 00:16:10.067 { 00:16:10.067 "name": "BaseBdev2", 00:16:10.067 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:10.067 "is_configured": true, 00:16:10.067 "data_offset": 0, 00:16:10.067 "data_size": 65536 00:16:10.067 }, 00:16:10.067 { 00:16:10.067 "name": "BaseBdev3", 00:16:10.067 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:10.067 "is_configured": true, 00:16:10.067 "data_offset": 0, 00:16:10.067 "data_size": 65536 00:16:10.067 }, 00:16:10.067 { 00:16:10.067 "name": "BaseBdev4", 00:16:10.067 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:10.067 "is_configured": true, 00:16:10.067 "data_offset": 0, 00:16:10.067 "data_size": 65536 00:16:10.067 } 00:16:10.067 ] 00:16:10.067 }' 00:16:10.067 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.067 16:30:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.067 [2024-11-05 16:30:23.090459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:10.067 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.067 Zero copy mechanism will not be used. 00:16:10.067 Running I/O for 60 seconds... 
00:16:10.326 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.326 16:30:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.326 16:30:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.585 [2024-11-05 16:30:23.420628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.585 16:30:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.585 16:30:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.585 [2024-11-05 16:30:23.474709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:10.586 [2024-11-05 16:30:23.476865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.586 [2024-11-05 16:30:23.595694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:10.586 [2024-11-05 16:30:23.597341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:10.862 [2024-11-05 16:30:23.805896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:10.862 [2024-11-05 16:30:23.806860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.121 135.00 IOPS, 405.00 MiB/s [2024-11-05T16:30:24.209Z] [2024-11-05 16:30:24.158911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:11.121 [2024-11-05 16:30:24.159641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:11.380 [2024-11-05 16:30:24.372541] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:11.380 [2024-11-05 16:30:24.372997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.639 "name": "raid_bdev1", 00:16:11.639 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:11.639 "strip_size_kb": 0, 00:16:11.639 "state": "online", 00:16:11.639 "raid_level": "raid1", 00:16:11.639 "superblock": false, 00:16:11.639 "num_base_bdevs": 4, 00:16:11.639 "num_base_bdevs_discovered": 4, 00:16:11.639 "num_base_bdevs_operational": 4, 00:16:11.639 "process": { 00:16:11.639 "type": "rebuild", 00:16:11.639 "target": "spare", 00:16:11.639 "progress": { 00:16:11.639 "blocks": 10240, 
00:16:11.639 "percent": 15 00:16:11.639 } 00:16:11.639 }, 00:16:11.639 "base_bdevs_list": [ 00:16:11.639 { 00:16:11.639 "name": "spare", 00:16:11.639 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:11.639 "is_configured": true, 00:16:11.639 "data_offset": 0, 00:16:11.639 "data_size": 65536 00:16:11.639 }, 00:16:11.639 { 00:16:11.639 "name": "BaseBdev2", 00:16:11.639 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:11.639 "is_configured": true, 00:16:11.639 "data_offset": 0, 00:16:11.639 "data_size": 65536 00:16:11.639 }, 00:16:11.639 { 00:16:11.639 "name": "BaseBdev3", 00:16:11.639 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:11.639 "is_configured": true, 00:16:11.639 "data_offset": 0, 00:16:11.639 "data_size": 65536 00:16:11.639 }, 00:16:11.639 { 00:16:11.639 "name": "BaseBdev4", 00:16:11.639 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:11.639 "is_configured": true, 00:16:11.639 "data_offset": 0, 00:16:11.639 "data_size": 65536 00:16:11.639 } 00:16:11.639 ] 00:16:11.639 }' 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.639 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.639 [2024-11-05 16:30:24.642555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.639 [2024-11-05 16:30:24.724425] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:11.899 [2024-11-05 16:30:24.732621] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.899 [2024-11-05 16:30:24.744421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.899 [2024-11-05 16:30:24.744554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.899 [2024-11-05 16:30:24.744574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.899 [2024-11-05 16:30:24.786340] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.899 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.899 "name": "raid_bdev1", 00:16:11.899 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:11.899 "strip_size_kb": 0, 00:16:11.899 "state": "online", 00:16:11.899 "raid_level": "raid1", 00:16:11.899 "superblock": false, 00:16:11.899 "num_base_bdevs": 4, 00:16:11.899 "num_base_bdevs_discovered": 3, 00:16:11.899 "num_base_bdevs_operational": 3, 00:16:11.899 "base_bdevs_list": [ 00:16:11.899 { 00:16:11.899 "name": null, 00:16:11.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.899 "is_configured": false, 00:16:11.899 "data_offset": 0, 00:16:11.899 "data_size": 65536 00:16:11.899 }, 00:16:11.899 { 00:16:11.899 "name": "BaseBdev2", 00:16:11.899 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:11.899 "is_configured": true, 00:16:11.899 "data_offset": 0, 00:16:11.899 "data_size": 65536 00:16:11.900 }, 00:16:11.900 { 00:16:11.900 "name": "BaseBdev3", 00:16:11.900 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:11.900 "is_configured": true, 00:16:11.900 "data_offset": 0, 00:16:11.900 "data_size": 65536 00:16:11.900 }, 00:16:11.900 { 00:16:11.900 "name": "BaseBdev4", 00:16:11.900 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:11.900 "is_configured": true, 00:16:11.900 "data_offset": 0, 00:16:11.900 "data_size": 65536 00:16:11.900 } 00:16:11.900 ] 00:16:11.900 }' 00:16:11.900 16:30:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:11.900 16:30:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.422 119.50 IOPS, 358.50 MiB/s [2024-11-05T16:30:25.510Z] 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.422 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.422 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.423 "name": "raid_bdev1", 00:16:12.423 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:12.423 "strip_size_kb": 0, 00:16:12.423 "state": "online", 00:16:12.423 "raid_level": "raid1", 00:16:12.423 "superblock": false, 00:16:12.423 "num_base_bdevs": 4, 00:16:12.423 "num_base_bdevs_discovered": 3, 00:16:12.423 "num_base_bdevs_operational": 3, 00:16:12.423 "base_bdevs_list": [ 00:16:12.423 { 00:16:12.423 "name": null, 00:16:12.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.423 "is_configured": false, 00:16:12.423 "data_offset": 0, 00:16:12.423 "data_size": 65536 00:16:12.423 }, 00:16:12.423 { 
00:16:12.423 "name": "BaseBdev2", 00:16:12.423 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:12.423 "is_configured": true, 00:16:12.423 "data_offset": 0, 00:16:12.423 "data_size": 65536 00:16:12.423 }, 00:16:12.423 { 00:16:12.423 "name": "BaseBdev3", 00:16:12.423 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:12.423 "is_configured": true, 00:16:12.423 "data_offset": 0, 00:16:12.423 "data_size": 65536 00:16:12.423 }, 00:16:12.423 { 00:16:12.423 "name": "BaseBdev4", 00:16:12.423 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:12.423 "is_configured": true, 00:16:12.423 "data_offset": 0, 00:16:12.423 "data_size": 65536 00:16:12.423 } 00:16:12.423 ] 00:16:12.423 }' 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.423 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.423 [2024-11-05 16:30:25.474823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.682 16:30:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.682 16:30:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.682 [2024-11-05 16:30:25.545195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:12.682 [2024-11-05 16:30:25.547367] bdev_raid.c:2935:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:12.682 [2024-11-05 16:30:25.671125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.682 [2024-11-05 16:30:25.671774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:12.941 [2024-11-05 16:30:25.788281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:12.941 [2024-11-05 16:30:25.789288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.200 130.00 IOPS, 390.00 MiB/s [2024-11-05T16:30:26.288Z] [2024-11-05 16:30:26.138224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.460 [2024-11-05 16:30:26.341292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.460 [2024-11-05 16:30:26.341712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.460 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.720 "name": "raid_bdev1", 00:16:13.720 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:13.720 "strip_size_kb": 0, 00:16:13.720 "state": "online", 00:16:13.720 "raid_level": "raid1", 00:16:13.720 "superblock": false, 00:16:13.720 "num_base_bdevs": 4, 00:16:13.720 "num_base_bdevs_discovered": 4, 00:16:13.720 "num_base_bdevs_operational": 4, 00:16:13.720 "process": { 00:16:13.720 "type": "rebuild", 00:16:13.720 "target": "spare", 00:16:13.720 "progress": { 00:16:13.720 "blocks": 12288, 00:16:13.720 "percent": 18 00:16:13.720 } 00:16:13.720 }, 00:16:13.720 "base_bdevs_list": [ 00:16:13.720 { 00:16:13.720 "name": "spare", 00:16:13.720 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:13.720 "is_configured": true, 00:16:13.720 "data_offset": 0, 00:16:13.720 "data_size": 65536 00:16:13.720 }, 00:16:13.720 { 00:16:13.720 "name": "BaseBdev2", 00:16:13.720 "uuid": "afffb665-3a4c-5ce3-ba21-0332a9dd0b8c", 00:16:13.720 "is_configured": true, 00:16:13.720 "data_offset": 0, 00:16:13.720 "data_size": 65536 00:16:13.720 }, 00:16:13.720 { 00:16:13.720 "name": "BaseBdev3", 00:16:13.720 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:13.720 "is_configured": true, 00:16:13.720 "data_offset": 0, 00:16:13.720 "data_size": 65536 00:16:13.720 }, 00:16:13.720 { 00:16:13.720 "name": "BaseBdev4", 00:16:13.720 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:13.720 "is_configured": true, 00:16:13.720 "data_offset": 0, 00:16:13.720 "data_size": 65536 00:16:13.720 } 00:16:13.720 ] 00:16:13.720 }' 00:16:13.720 16:30:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.720 [2024-11-05 16:30:26.567325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:13.720 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.721 [2024-11-05 16:30:26.659268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.721 [2024-11-05 16:30:26.794667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:13.721 [2024-11-05 16:30:26.795045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:13.721 [2024-11-05 16:30:26.796736] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:13.721 [2024-11-05 16:30:26.796774] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.721 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.980 "name": "raid_bdev1", 00:16:13.980 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:13.980 "strip_size_kb": 0, 00:16:13.980 "state": "online", 00:16:13.980 "raid_level": "raid1", 00:16:13.980 "superblock": false, 00:16:13.980 "num_base_bdevs": 4, 00:16:13.980 "num_base_bdevs_discovered": 3, 00:16:13.980 "num_base_bdevs_operational": 3, 00:16:13.980 
"process": { 00:16:13.980 "type": "rebuild", 00:16:13.980 "target": "spare", 00:16:13.980 "progress": { 00:16:13.980 "blocks": 16384, 00:16:13.980 "percent": 25 00:16:13.980 } 00:16:13.980 }, 00:16:13.980 "base_bdevs_list": [ 00:16:13.980 { 00:16:13.980 "name": "spare", 00:16:13.980 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:13.980 "is_configured": true, 00:16:13.980 "data_offset": 0, 00:16:13.980 "data_size": 65536 00:16:13.980 }, 00:16:13.980 { 00:16:13.980 "name": null, 00:16:13.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.980 "is_configured": false, 00:16:13.980 "data_offset": 0, 00:16:13.980 "data_size": 65536 00:16:13.980 }, 00:16:13.980 { 00:16:13.980 "name": "BaseBdev3", 00:16:13.980 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:13.980 "is_configured": true, 00:16:13.980 "data_offset": 0, 00:16:13.980 "data_size": 65536 00:16:13.980 }, 00:16:13.980 { 00:16:13.980 "name": "BaseBdev4", 00:16:13.980 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:13.980 "is_configured": true, 00:16:13.980 "data_offset": 0, 00:16:13.980 "data_size": 65536 00:16:13.980 } 00:16:13.980 ] 00:16:13.980 }' 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=498 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.980 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.980 "name": "raid_bdev1", 00:16:13.980 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:13.980 "strip_size_kb": 0, 00:16:13.980 "state": "online", 00:16:13.980 "raid_level": "raid1", 00:16:13.980 "superblock": false, 00:16:13.980 "num_base_bdevs": 4, 00:16:13.980 "num_base_bdevs_discovered": 3, 00:16:13.981 "num_base_bdevs_operational": 3, 00:16:13.981 "process": { 00:16:13.981 "type": "rebuild", 00:16:13.981 "target": "spare", 00:16:13.981 "progress": { 00:16:13.981 "blocks": 18432, 00:16:13.981 "percent": 28 00:16:13.981 } 00:16:13.981 }, 00:16:13.981 "base_bdevs_list": [ 00:16:13.981 { 00:16:13.981 "name": "spare", 00:16:13.981 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:13.981 "is_configured": true, 00:16:13.981 "data_offset": 0, 00:16:13.981 "data_size": 65536 00:16:13.981 }, 00:16:13.981 { 00:16:13.981 "name": null, 00:16:13.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.981 "is_configured": false, 00:16:13.981 
"data_offset": 0, 00:16:13.981 "data_size": 65536 00:16:13.981 }, 00:16:13.981 { 00:16:13.981 "name": "BaseBdev3", 00:16:13.981 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:13.981 "is_configured": true, 00:16:13.981 "data_offset": 0, 00:16:13.981 "data_size": 65536 00:16:13.981 }, 00:16:13.981 { 00:16:13.981 "name": "BaseBdev4", 00:16:13.981 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:13.981 "is_configured": true, 00:16:13.981 "data_offset": 0, 00:16:13.981 "data_size": 65536 00:16:13.981 } 00:16:13.981 ] 00:16:13.981 }' 00:16:13.981 16:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.981 16:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.981 16:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.981 16:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.981 16:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.239 111.75 IOPS, 335.25 MiB/s [2024-11-05T16:30:27.327Z] [2024-11-05 16:30:27.156339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:14.497 [2024-11-05 16:30:27.495393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:14.497 [2024-11-05 16:30:27.496061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:15.109 [2024-11-05 16:30:27.967609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.109 101.00 IOPS, 303.00 MiB/s [2024-11-05T16:30:28.197Z] 16:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.109 "name": "raid_bdev1", 00:16:15.109 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:15.109 "strip_size_kb": 0, 00:16:15.109 "state": "online", 00:16:15.109 "raid_level": "raid1", 00:16:15.109 "superblock": false, 00:16:15.109 "num_base_bdevs": 4, 00:16:15.109 "num_base_bdevs_discovered": 3, 00:16:15.109 "num_base_bdevs_operational": 3, 00:16:15.109 "process": { 00:16:15.109 "type": "rebuild", 00:16:15.109 "target": "spare", 00:16:15.109 "progress": { 00:16:15.109 "blocks": 34816, 00:16:15.109 "percent": 53 00:16:15.109 } 00:16:15.109 }, 00:16:15.109 "base_bdevs_list": [ 00:16:15.109 { 00:16:15.109 "name": "spare", 00:16:15.109 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:15.109 "is_configured": true, 00:16:15.109 "data_offset": 0, 00:16:15.109 "data_size": 65536 00:16:15.109 }, 
00:16:15.109 { 00:16:15.109 "name": null, 00:16:15.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.109 "is_configured": false, 00:16:15.109 "data_offset": 0, 00:16:15.109 "data_size": 65536 00:16:15.109 }, 00:16:15.109 { 00:16:15.109 "name": "BaseBdev3", 00:16:15.109 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:15.109 "is_configured": true, 00:16:15.109 "data_offset": 0, 00:16:15.109 "data_size": 65536 00:16:15.109 }, 00:16:15.109 { 00:16:15.109 "name": "BaseBdev4", 00:16:15.109 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:15.109 "is_configured": true, 00:16:15.109 "data_offset": 0, 00:16:15.109 "data_size": 65536 00:16:15.109 } 00:16:15.109 ] 00:16:15.109 }' 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.109 16:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.109 [2024-11-05 16:30:28.194662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:15.109 [2024-11-05 16:30:28.195552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:15.369 [2024-11-05 16:30:28.412325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:15.938 [2024-11-05 16:30:28.988472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:16.198 89.50 IOPS, 268.50 MiB/s [2024-11-05T16:30:29.286Z] 16:30:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.198 "name": "raid_bdev1", 00:16:16.198 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:16.198 "strip_size_kb": 0, 00:16:16.198 "state": "online", 00:16:16.198 "raid_level": "raid1", 00:16:16.198 "superblock": false, 00:16:16.198 "num_base_bdevs": 4, 00:16:16.198 "num_base_bdevs_discovered": 3, 00:16:16.198 "num_base_bdevs_operational": 3, 00:16:16.198 "process": { 00:16:16.198 "type": "rebuild", 00:16:16.198 "target": "spare", 00:16:16.198 "progress": { 00:16:16.198 "blocks": 51200, 00:16:16.198 "percent": 78 00:16:16.198 } 00:16:16.198 }, 00:16:16.198 "base_bdevs_list": [ 00:16:16.198 { 00:16:16.198 "name": "spare", 00:16:16.198 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 
00:16:16.198 "is_configured": true, 00:16:16.198 "data_offset": 0, 00:16:16.198 "data_size": 65536 00:16:16.198 }, 00:16:16.198 { 00:16:16.198 "name": null, 00:16:16.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.198 "is_configured": false, 00:16:16.198 "data_offset": 0, 00:16:16.198 "data_size": 65536 00:16:16.198 }, 00:16:16.198 { 00:16:16.198 "name": "BaseBdev3", 00:16:16.198 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:16.198 "is_configured": true, 00:16:16.198 "data_offset": 0, 00:16:16.198 "data_size": 65536 00:16:16.198 }, 00:16:16.198 { 00:16:16.198 "name": "BaseBdev4", 00:16:16.198 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:16.198 "is_configured": true, 00:16:16.198 "data_offset": 0, 00:16:16.198 "data_size": 65536 00:16:16.198 } 00:16:16.198 ] 00:16:16.198 }' 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.198 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.457 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.457 16:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.042 [2024-11-05 16:30:29.830960] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:17.042 [2024-11-05 16:30:29.937067] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:17.042 [2024-11-05 16:30:29.940277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.303 82.71 IOPS, 248.14 MiB/s [2024-11-05T16:30:30.391Z] 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.303 "name": "raid_bdev1", 00:16:17.303 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:17.303 "strip_size_kb": 0, 00:16:17.303 "state": "online", 00:16:17.303 "raid_level": "raid1", 00:16:17.303 "superblock": false, 00:16:17.303 "num_base_bdevs": 4, 00:16:17.303 "num_base_bdevs_discovered": 3, 00:16:17.303 "num_base_bdevs_operational": 3, 00:16:17.303 "base_bdevs_list": [ 00:16:17.303 { 00:16:17.303 "name": "spare", 00:16:17.303 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:17.303 "is_configured": true, 00:16:17.303 "data_offset": 0, 00:16:17.303 "data_size": 65536 00:16:17.303 }, 00:16:17.303 { 00:16:17.303 "name": null, 00:16:17.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.303 "is_configured": false, 00:16:17.303 "data_offset": 0, 00:16:17.303 "data_size": 65536 00:16:17.303 }, 00:16:17.303 { 00:16:17.303 "name": 
"BaseBdev3", 00:16:17.303 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:17.303 "is_configured": true, 00:16:17.303 "data_offset": 0, 00:16:17.303 "data_size": 65536 00:16:17.303 }, 00:16:17.303 { 00:16:17.303 "name": "BaseBdev4", 00:16:17.303 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:17.303 "is_configured": true, 00:16:17.303 "data_offset": 0, 00:16:17.303 "data_size": 65536 00:16:17.303 } 00:16:17.303 ] 00:16:17.303 }' 00:16:17.303 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.563 
16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.563 "name": "raid_bdev1", 00:16:17.563 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:17.563 "strip_size_kb": 0, 00:16:17.563 "state": "online", 00:16:17.563 "raid_level": "raid1", 00:16:17.563 "superblock": false, 00:16:17.563 "num_base_bdevs": 4, 00:16:17.563 "num_base_bdevs_discovered": 3, 00:16:17.563 "num_base_bdevs_operational": 3, 00:16:17.563 "base_bdevs_list": [ 00:16:17.563 { 00:16:17.563 "name": "spare", 00:16:17.563 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": null, 00:16:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.563 "is_configured": false, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev3", 00:16:17.563 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev4", 00:16:17.563 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 } 00:16:17.563 ] 00:16:17.563 }' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.563 "name": "raid_bdev1", 00:16:17.563 "uuid": "a6a50e97-052b-4d8a-a019-bd864aba18ca", 00:16:17.563 "strip_size_kb": 0, 00:16:17.563 "state": "online", 00:16:17.563 "raid_level": "raid1", 00:16:17.563 "superblock": false, 00:16:17.563 "num_base_bdevs": 4, 00:16:17.563 
"num_base_bdevs_discovered": 3, 00:16:17.563 "num_base_bdevs_operational": 3, 00:16:17.563 "base_bdevs_list": [ 00:16:17.563 { 00:16:17.563 "name": "spare", 00:16:17.563 "uuid": "ee071a0e-706d-5917-b73f-89f21b2594eb", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": null, 00:16:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.563 "is_configured": false, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev3", 00:16:17.563 "uuid": "354b1f2b-5ae3-55eb-8579-41d54036743f", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 }, 00:16:17.563 { 00:16:17.563 "name": "BaseBdev4", 00:16:17.563 "uuid": "84a22d7c-e431-5eb8-af1d-56dc894eb253", 00:16:17.563 "is_configured": true, 00:16:17.563 "data_offset": 0, 00:16:17.563 "data_size": 65536 00:16:17.563 } 00:16:17.563 ] 00:16:17.563 }' 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.563 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.133 16:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.133 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.133 16:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.133 [2024-11-05 16:30:30.958408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.133 [2024-11-05 16:30:30.958442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.133 00:16:18.133 Latency(us) 00:16:18.133 [2024-11-05T16:30:31.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.133 Job: raid_bdev1 (Core Mask 0x1, 
workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:18.133 raid_bdev1 : 7.96 78.13 234.40 0.00 0.00 17655.29 314.80 122715.44 00:16:18.133 [2024-11-05T16:30:31.221Z] =================================================================================================================== 00:16:18.133 [2024-11-05T16:30:31.221Z] Total : 78.13 234.40 0.00 0.00 17655.29 314.80 122715.44 00:16:18.133 { 00:16:18.133 "results": [ 00:16:18.133 { 00:16:18.133 "job": "raid_bdev1", 00:16:18.133 "core_mask": "0x1", 00:16:18.133 "workload": "randrw", 00:16:18.133 "percentage": 50, 00:16:18.133 "status": "finished", 00:16:18.133 "queue_depth": 2, 00:16:18.133 "io_size": 3145728, 00:16:18.133 "runtime": 7.960602, 00:16:18.133 "iops": 78.13479432836863, 00:16:18.133 "mibps": 234.40438298510588, 00:16:18.133 "io_failed": 0, 00:16:18.133 "io_timeout": 0, 00:16:18.133 "avg_latency_us": 17655.293289712015, 00:16:18.133 "min_latency_us": 314.80174672489085, 00:16:18.133 "max_latency_us": 122715.44454148471 00:16:18.133 } 00:16:18.133 ], 00:16:18.133 "core_count": 1 00:16:18.133 } 00:16:18.133 [2024-11-05 16:30:31.061933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.133 [2024-11-05 16:30:31.061994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.133 [2024-11-05 16:30:31.062108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.133 [2024-11-05 16:30:31.062119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.133 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:18.392 /dev/nbd0 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.392 1+0 records in 00:16:18.392 1+0 records out 00:16:18.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423575 s, 9.7 MB/s 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:18.392 16:30:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.392 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:18.652 /dev/nbd1 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:18.652 16:30:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.652 1+0 records in 00:16:18.652 1+0 records out 00:16:18.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366514 s, 11.2 MB/s 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.652 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:18.911 16:30:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.911 16:30:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:19.170 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.171 16:30:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.171 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:19.431 /dev/nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.431 1+0 records in 00:16:19.431 1+0 records out 00:16:19.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395857 s, 10.3 MB/s 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.431 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.691 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79089 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79089 ']' 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79089 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:19.950 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79089 00:16:19.950 killing process with pid 79089 00:16:19.950 Received shutdown signal, test time was about 9.862363 seconds 00:16:19.950 00:16:19.950 Latency(us) 00:16:19.950 [2024-11-05T16:30:33.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.950 [2024-11-05T16:30:33.038Z] =================================================================================================================== 00:16:19.950 [2024-11-05T16:30:33.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.951 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:19.951 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:19.951 16:30:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 79089' 00:16:19.951 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79089 00:16:19.951 [2024-11-05 16:30:32.936306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.951 16:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79089 00:16:20.520 [2024-11-05 16:30:33.379826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:21.900 00:16:21.900 real 0m13.484s 00:16:21.900 user 0m17.004s 00:16:21.900 sys 0m1.831s 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.900 ************************************ 00:16:21.900 END TEST raid_rebuild_test_io 00:16:21.900 ************************************ 00:16:21.900 16:30:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:21.900 16:30:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:21.900 16:30:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:21.900 16:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.900 ************************************ 00:16:21.900 START TEST raid_rebuild_test_sb_io 00:16:21.900 ************************************ 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79504 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79504 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79504 ']' 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:21.900 16:30:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.900 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:21.900 Zero copy mechanism will not be used. 00:16:21.900 [2024-11-05 16:30:34.763734] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:16:21.900 [2024-11-05 16:30:34.763854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79504 ] 00:16:21.900 [2024-11-05 16:30:34.937715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.160 [2024-11-05 16:30:35.055517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.422 [2024-11-05 16:30:35.269939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.422 [2024-11-05 16:30:35.269980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.692 BaseBdev1_malloc 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.692 [2024-11-05 16:30:35.668996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.692 [2024-11-05 16:30:35.669069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.692 [2024-11-05 16:30:35.669096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.692 [2024-11-05 16:30:35.669109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.692 [2024-11-05 16:30:35.671716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.692 [2024-11-05 16:30:35.671812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.692 BaseBdev1 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.692 BaseBdev2_malloc 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.692 [2024-11-05 16:30:35.727432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:22.692 [2024-11-05 16:30:35.727548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.692 [2024-11-05 16:30:35.727587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:22.692 [2024-11-05 16:30:35.727639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.692 [2024-11-05 16:30:35.730057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.692 [2024-11-05 16:30:35.730140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.692 BaseBdev2 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.692 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 BaseBdev3_malloc 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 [2024-11-05 16:30:35.799036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:22.952 [2024-11-05 16:30:35.799097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.952 [2024-11-05 16:30:35.799119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:22.952 [2024-11-05 16:30:35.799130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.952 [2024-11-05 16:30:35.801456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.952 [2024-11-05 16:30:35.801502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:22.952 BaseBdev3 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 BaseBdev4_malloc 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 [2024-11-05 16:30:35.853694] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:16:22.952 [2024-11-05 16:30:35.853772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.952 [2024-11-05 16:30:35.853797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:22.952 [2024-11-05 16:30:35.853809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.952 [2024-11-05 16:30:35.856067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.952 [2024-11-05 16:30:35.856119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:22.952 BaseBdev4 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 spare_malloc 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 spare_delay 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 [2024-11-05 16:30:35.918369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.952 [2024-11-05 16:30:35.918453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.952 [2024-11-05 16:30:35.918478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:22.952 [2024-11-05 16:30:35.918490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.952 [2024-11-05 16:30:35.920816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.952 [2024-11-05 16:30:35.920863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.952 spare 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 [2024-11-05 16:30:35.930432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.952 [2024-11-05 16:30:35.932844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.952 [2024-11-05 16:30:35.932932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.952 [2024-11-05 16:30:35.932993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.952 [2024-11-05 16:30:35.933217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:16:22.952 [2024-11-05 16:30:35.933238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.952 [2024-11-05 16:30:35.933573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:22.952 [2024-11-05 16:30:35.933796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:22.952 [2024-11-05 16:30:35.933808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:22.952 [2024-11-05 16:30:35.933991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.952 "name": "raid_bdev1", 00:16:22.952 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:22.952 "strip_size_kb": 0, 00:16:22.952 "state": "online", 00:16:22.952 "raid_level": "raid1", 00:16:22.952 "superblock": true, 00:16:22.952 "num_base_bdevs": 4, 00:16:22.952 "num_base_bdevs_discovered": 4, 00:16:22.952 "num_base_bdevs_operational": 4, 00:16:22.952 "base_bdevs_list": [ 00:16:22.952 { 00:16:22.952 "name": "BaseBdev1", 00:16:22.952 "uuid": "8434dfbf-4b1c-5151-8f2e-1d35234c1b8f", 00:16:22.952 "is_configured": true, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": "BaseBdev2", 00:16:22.952 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:22.952 "is_configured": true, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": "BaseBdev3", 00:16:22.952 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:22.952 "is_configured": true, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": "BaseBdev4", 00:16:22.952 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:22.952 "is_configured": true, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 } 00:16:22.952 ] 00:16:22.952 }' 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:22.952 16:30:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 [2024-11-05 16:30:36.370082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:23.520 16:30:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 [2024-11-05 16:30:36.461477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.520 "name": "raid_bdev1", 00:16:23.520 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:23.520 "strip_size_kb": 0, 00:16:23.520 "state": "online", 00:16:23.520 "raid_level": "raid1", 00:16:23.520 "superblock": true, 00:16:23.520 "num_base_bdevs": 4, 00:16:23.520 "num_base_bdevs_discovered": 3, 00:16:23.520 "num_base_bdevs_operational": 3, 00:16:23.520 "base_bdevs_list": [ 00:16:23.520 { 00:16:23.520 "name": null, 00:16:23.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.520 "is_configured": false, 00:16:23.520 "data_offset": 0, 00:16:23.520 "data_size": 63488 00:16:23.520 }, 00:16:23.520 { 00:16:23.520 "name": "BaseBdev2", 00:16:23.520 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:23.520 "is_configured": true, 00:16:23.520 "data_offset": 2048, 00:16:23.520 "data_size": 63488 00:16:23.520 }, 00:16:23.520 { 00:16:23.520 "name": "BaseBdev3", 00:16:23.520 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:23.520 "is_configured": true, 00:16:23.520 "data_offset": 2048, 00:16:23.520 "data_size": 63488 00:16:23.520 }, 00:16:23.520 { 00:16:23.520 "name": "BaseBdev4", 00:16:23.520 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:23.520 "is_configured": true, 00:16:23.520 "data_offset": 2048, 00:16:23.520 "data_size": 63488 00:16:23.520 } 00:16:23.520 ] 00:16:23.520 }' 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.520 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.520 [2024-11-05 16:30:36.598725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:23.520 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:23.520 Zero copy mechanism will not be used. 
00:16:23.520 Running I/O for 60 seconds... 00:16:24.089 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.089 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.089 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.089 [2024-11-05 16:30:36.892965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.089 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.089 16:30:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:24.089 [2024-11-05 16:30:36.984084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:24.089 [2024-11-05 16:30:36.986330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.089 [2024-11-05 16:30:37.110891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:24.089 [2024-11-05 16:30:37.111593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:24.348 [2024-11-05 16:30:37.320559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.348 [2024-11-05 16:30:37.321036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.607 141.00 IOPS, 423.00 MiB/s [2024-11-05T16:30:37.695Z] [2024-11-05 16:30:37.681336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:24.866 [2024-11-05 16:30:37.929212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:24.866 
[2024-11-05 16:30:37.929696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.126 16:30:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.126 "name": "raid_bdev1", 00:16:25.126 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:25.126 "strip_size_kb": 0, 00:16:25.126 "state": "online", 00:16:25.126 "raid_level": "raid1", 00:16:25.126 "superblock": true, 00:16:25.126 "num_base_bdevs": 4, 00:16:25.126 "num_base_bdevs_discovered": 4, 00:16:25.126 "num_base_bdevs_operational": 4, 00:16:25.126 "process": { 00:16:25.126 "type": "rebuild", 00:16:25.126 "target": "spare", 00:16:25.126 "progress": { 00:16:25.126 "blocks": 10240, 00:16:25.126 "percent": 16 00:16:25.126 } 00:16:25.126 }, 00:16:25.126 "base_bdevs_list": [ 
00:16:25.126 { 00:16:25.126 "name": "spare", 00:16:25.126 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 2048, 00:16:25.126 "data_size": 63488 00:16:25.126 }, 00:16:25.126 { 00:16:25.126 "name": "BaseBdev2", 00:16:25.126 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 2048, 00:16:25.126 "data_size": 63488 00:16:25.126 }, 00:16:25.126 { 00:16:25.126 "name": "BaseBdev3", 00:16:25.126 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 2048, 00:16:25.126 "data_size": 63488 00:16:25.126 }, 00:16:25.126 { 00:16:25.126 "name": "BaseBdev4", 00:16:25.126 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:25.126 "is_configured": true, 00:16:25.126 "data_offset": 2048, 00:16:25.126 "data_size": 63488 00:16:25.126 } 00:16:25.126 ] 00:16:25.126 }' 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.126 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.126 [2024-11-05 16:30:38.125692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.386 [2024-11-05 16:30:38.238758] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:16:25.386 [2024-11-05 16:30:38.250436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.386 [2024-11-05 16:30:38.250600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.386 [2024-11-05 16:30:38.250635] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.386 [2024-11-05 16:30:38.295947] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.386 16:30:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.386 "name": "raid_bdev1", 00:16:25.386 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:25.386 "strip_size_kb": 0, 00:16:25.386 "state": "online", 00:16:25.386 "raid_level": "raid1", 00:16:25.386 "superblock": true, 00:16:25.386 "num_base_bdevs": 4, 00:16:25.386 "num_base_bdevs_discovered": 3, 00:16:25.386 "num_base_bdevs_operational": 3, 00:16:25.386 "base_bdevs_list": [ 00:16:25.386 { 00:16:25.386 "name": null, 00:16:25.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.386 "is_configured": false, 00:16:25.386 "data_offset": 0, 00:16:25.386 "data_size": 63488 00:16:25.386 }, 00:16:25.386 { 00:16:25.386 "name": "BaseBdev2", 00:16:25.386 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:25.386 "is_configured": true, 00:16:25.386 "data_offset": 2048, 00:16:25.386 "data_size": 63488 00:16:25.386 }, 00:16:25.386 { 00:16:25.386 "name": "BaseBdev3", 00:16:25.386 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:25.386 "is_configured": true, 00:16:25.386 "data_offset": 2048, 00:16:25.386 "data_size": 63488 00:16:25.386 }, 00:16:25.386 { 00:16:25.386 "name": "BaseBdev4", 00:16:25.386 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:25.386 "is_configured": true, 00:16:25.386 "data_offset": 2048, 00:16:25.386 "data_size": 63488 00:16:25.386 } 00:16:25.386 ] 00:16:25.386 }' 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.386 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.903 138.00 IOPS, 414.00 MiB/s 
[2024-11-05T16:30:38.991Z] 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.903 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.903 "name": "raid_bdev1", 00:16:25.903 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:25.903 "strip_size_kb": 0, 00:16:25.903 "state": "online", 00:16:25.903 "raid_level": "raid1", 00:16:25.903 "superblock": true, 00:16:25.903 "num_base_bdevs": 4, 00:16:25.903 "num_base_bdevs_discovered": 3, 00:16:25.903 "num_base_bdevs_operational": 3, 00:16:25.903 "base_bdevs_list": [ 00:16:25.903 { 00:16:25.903 "name": null, 00:16:25.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.903 "is_configured": false, 00:16:25.903 "data_offset": 0, 00:16:25.903 "data_size": 63488 00:16:25.903 }, 00:16:25.903 { 00:16:25.903 "name": "BaseBdev2", 00:16:25.903 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:25.903 
"is_configured": true, 00:16:25.903 "data_offset": 2048, 00:16:25.903 "data_size": 63488 00:16:25.903 }, 00:16:25.903 { 00:16:25.903 "name": "BaseBdev3", 00:16:25.904 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:25.904 "is_configured": true, 00:16:25.904 "data_offset": 2048, 00:16:25.904 "data_size": 63488 00:16:25.904 }, 00:16:25.904 { 00:16:25.904 "name": "BaseBdev4", 00:16:25.904 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:25.904 "is_configured": true, 00:16:25.904 "data_offset": 2048, 00:16:25.904 "data_size": 63488 00:16:25.904 } 00:16:25.904 ] 00:16:25.904 }' 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.904 [2024-11-05 16:30:38.918898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.904 16:30:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:25.904 [2024-11-05 16:30:38.978774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:25.904 [2024-11-05 16:30:38.980848] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.162 
[2024-11-05 16:30:39.098026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:26.162 [2024-11-05 16:30:39.098641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:26.162 [2024-11-05 16:30:39.215979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:26.162 [2024-11-05 16:30:39.216340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:26.733 [2024-11-05 16:30:39.571928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:26.733 [2024-11-05 16:30:39.572561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:26.998 139.33 IOPS, 418.00 MiB/s [2024-11-05T16:30:40.086Z] [2024-11-05 16:30:39.824926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.998 
16:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.998 16:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.998 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.998 "name": "raid_bdev1", 00:16:26.998 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:26.998 "strip_size_kb": 0, 00:16:26.998 "state": "online", 00:16:26.998 "raid_level": "raid1", 00:16:26.998 "superblock": true, 00:16:26.998 "num_base_bdevs": 4, 00:16:26.998 "num_base_bdevs_discovered": 4, 00:16:26.998 "num_base_bdevs_operational": 4, 00:16:26.998 "process": { 00:16:26.998 "type": "rebuild", 00:16:26.998 "target": "spare", 00:16:26.998 "progress": { 00:16:26.998 "blocks": 12288, 00:16:26.998 "percent": 19 00:16:26.998 } 00:16:26.998 }, 00:16:26.998 "base_bdevs_list": [ 00:16:26.998 { 00:16:26.998 "name": "spare", 00:16:26.998 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:26.998 "is_configured": true, 00:16:26.998 "data_offset": 2048, 00:16:26.998 "data_size": 63488 00:16:26.998 }, 00:16:26.998 { 00:16:26.998 "name": "BaseBdev2", 00:16:26.998 "uuid": "964e88f2-8039-5126-bedb-ea0bdcc896b8", 00:16:26.998 "is_configured": true, 00:16:26.998 "data_offset": 2048, 00:16:26.998 "data_size": 63488 00:16:26.998 }, 00:16:26.998 { 00:16:26.998 "name": "BaseBdev3", 00:16:26.998 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:26.998 "is_configured": true, 00:16:26.998 "data_offset": 2048, 00:16:26.998 "data_size": 63488 00:16:26.998 }, 00:16:26.998 { 00:16:26.998 "name": "BaseBdev4", 00:16:26.998 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:26.998 "is_configured": true, 00:16:26.998 "data_offset": 2048, 00:16:26.998 "data_size": 63488 00:16:26.998 } 00:16:26.998 ] 00:16:26.998 }' 00:16:26.998 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.998 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.998 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.998 [2024-11-05 16:30:40.079832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:27.267 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:27.268 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.268 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 [2024-11-05 16:30:40.115057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.268 [2024-11-05 16:30:40.197636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:27.527 [2024-11-05 16:30:40.400049] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:27.527 [2024-11-05 
16:30:40.400098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:27.527 [2024-11-05 16:30:40.401829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.527 "name": "raid_bdev1", 00:16:27.527 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:27.527 "strip_size_kb": 0, 00:16:27.527 
"state": "online", 00:16:27.527 "raid_level": "raid1", 00:16:27.527 "superblock": true, 00:16:27.527 "num_base_bdevs": 4, 00:16:27.527 "num_base_bdevs_discovered": 3, 00:16:27.527 "num_base_bdevs_operational": 3, 00:16:27.527 "process": { 00:16:27.527 "type": "rebuild", 00:16:27.527 "target": "spare", 00:16:27.527 "progress": { 00:16:27.527 "blocks": 16384, 00:16:27.527 "percent": 25 00:16:27.527 } 00:16:27.527 }, 00:16:27.527 "base_bdevs_list": [ 00:16:27.527 { 00:16:27.527 "name": "spare", 00:16:27.527 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 2048, 00:16:27.527 "data_size": 63488 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "name": null, 00:16:27.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.527 "is_configured": false, 00:16:27.527 "data_offset": 0, 00:16:27.527 "data_size": 63488 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "name": "BaseBdev3", 00:16:27.527 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 2048, 00:16:27.527 "data_size": 63488 00:16:27.527 }, 00:16:27.527 { 00:16:27.527 "name": "BaseBdev4", 00:16:27.527 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:27.527 "is_configured": true, 00:16:27.527 "data_offset": 2048, 00:16:27.527 "data_size": 63488 00:16:27.527 } 00:16:27.527 ] 00:16:27.527 }' 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.527 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=512 00:16:27.528 16:30:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.528 125.25 IOPS, 375.75 MiB/s [2024-11-05T16:30:40.616Z] 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.528 "name": "raid_bdev1", 00:16:27.528 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:27.528 "strip_size_kb": 0, 00:16:27.528 "state": "online", 00:16:27.528 "raid_level": "raid1", 00:16:27.528 "superblock": true, 00:16:27.528 "num_base_bdevs": 4, 00:16:27.528 "num_base_bdevs_discovered": 3, 00:16:27.528 "num_base_bdevs_operational": 3, 00:16:27.528 "process": { 00:16:27.528 "type": "rebuild", 00:16:27.528 "target": "spare", 00:16:27.528 "progress": { 00:16:27.528 "blocks": 18432, 00:16:27.528 "percent": 29 00:16:27.528 } 00:16:27.528 }, 00:16:27.528 "base_bdevs_list": [ 00:16:27.528 { 
00:16:27.528 "name": "spare", 00:16:27.528 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:27.528 "is_configured": true, 00:16:27.528 "data_offset": 2048, 00:16:27.528 "data_size": 63488 00:16:27.528 }, 00:16:27.528 { 00:16:27.528 "name": null, 00:16:27.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.528 "is_configured": false, 00:16:27.528 "data_offset": 0, 00:16:27.528 "data_size": 63488 00:16:27.528 }, 00:16:27.528 { 00:16:27.528 "name": "BaseBdev3", 00:16:27.528 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:27.528 "is_configured": true, 00:16:27.528 "data_offset": 2048, 00:16:27.528 "data_size": 63488 00:16:27.528 }, 00:16:27.528 { 00:16:27.528 "name": "BaseBdev4", 00:16:27.528 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:27.528 "is_configured": true, 00:16:27.528 "data_offset": 2048, 00:16:27.528 "data_size": 63488 00:16:27.528 } 00:16:27.528 ] 00:16:27.528 }' 00:16:27.528 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.787 [2024-11-05 16:30:40.646604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:27.787 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.787 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.787 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.787 16:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.045 [2024-11-05 16:30:41.111092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:28.872 112.20 IOPS, 336.60 MiB/s [2024-11-05T16:30:41.960Z] 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.872 16:30:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.872 "name": "raid_bdev1", 00:16:28.872 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:28.872 "strip_size_kb": 0, 00:16:28.872 "state": "online", 00:16:28.872 "raid_level": "raid1", 00:16:28.872 "superblock": true, 00:16:28.872 "num_base_bdevs": 4, 00:16:28.872 "num_base_bdevs_discovered": 3, 00:16:28.872 "num_base_bdevs_operational": 3, 00:16:28.872 "process": { 00:16:28.872 "type": "rebuild", 00:16:28.872 "target": "spare", 00:16:28.872 "progress": { 00:16:28.872 "blocks": 36864, 00:16:28.872 "percent": 58 00:16:28.872 } 00:16:28.872 }, 00:16:28.872 "base_bdevs_list": [ 00:16:28.872 { 00:16:28.872 "name": "spare", 00:16:28.872 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:28.872 "is_configured": true, 00:16:28.872 "data_offset": 2048, 
00:16:28.872 "data_size": 63488 00:16:28.872 }, 00:16:28.872 { 00:16:28.872 "name": null, 00:16:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.872 "is_configured": false, 00:16:28.872 "data_offset": 0, 00:16:28.872 "data_size": 63488 00:16:28.872 }, 00:16:28.872 { 00:16:28.872 "name": "BaseBdev3", 00:16:28.872 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:28.872 "is_configured": true, 00:16:28.872 "data_offset": 2048, 00:16:28.872 "data_size": 63488 00:16:28.872 }, 00:16:28.872 { 00:16:28.872 "name": "BaseBdev4", 00:16:28.872 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:28.872 "is_configured": true, 00:16:28.872 "data_offset": 2048, 00:16:28.872 "data_size": 63488 00:16:28.872 } 00:16:28.872 ] 00:16:28.872 }' 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.872 [2024-11-05 16:30:41.809361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.872 16:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.130 [2024-11-05 16:30:42.150860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:29.390 [2024-11-05 16:30:42.369979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:29.390 [2024-11-05 16:30:42.370682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:29.907 99.83 IOPS, 
299.50 MiB/s [2024-11-05T16:30:42.995Z] [2024-11-05 16:30:42.789416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.907 "name": "raid_bdev1", 00:16:29.907 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:29.907 "strip_size_kb": 0, 00:16:29.907 "state": "online", 00:16:29.907 "raid_level": "raid1", 00:16:29.907 "superblock": true, 00:16:29.907 "num_base_bdevs": 4, 00:16:29.907 "num_base_bdevs_discovered": 3, 00:16:29.907 "num_base_bdevs_operational": 3, 00:16:29.907 "process": { 00:16:29.907 "type": "rebuild", 00:16:29.907 "target": "spare", 
00:16:29.907 "progress": { 00:16:29.907 "blocks": 53248, 00:16:29.907 "percent": 83 00:16:29.907 } 00:16:29.907 }, 00:16:29.907 "base_bdevs_list": [ 00:16:29.907 { 00:16:29.907 "name": "spare", 00:16:29.907 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:29.907 "is_configured": true, 00:16:29.907 "data_offset": 2048, 00:16:29.907 "data_size": 63488 00:16:29.907 }, 00:16:29.907 { 00:16:29.907 "name": null, 00:16:29.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.907 "is_configured": false, 00:16:29.907 "data_offset": 0, 00:16:29.907 "data_size": 63488 00:16:29.907 }, 00:16:29.907 { 00:16:29.907 "name": "BaseBdev3", 00:16:29.907 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:29.907 "is_configured": true, 00:16:29.907 "data_offset": 2048, 00:16:29.907 "data_size": 63488 00:16:29.907 }, 00:16:29.907 { 00:16:29.907 "name": "BaseBdev4", 00:16:29.907 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:29.907 "is_configured": true, 00:16:29.907 "data_offset": 2048, 00:16:29.907 "data_size": 63488 00:16:29.907 } 00:16:29.907 ] 00:16:29.907 }' 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.907 16:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.167 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.167 16:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.167 [2024-11-05 16:30:43.021593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:30.426 [2024-11-05 16:30:43.426525] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.426 [2024-11-05 16:30:43.458949] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.426 [2024-11-05 16:30:43.461798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.945 90.29 IOPS, 270.86 MiB/s [2024-11-05T16:30:44.033Z] 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.945 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.204 "name": "raid_bdev1", 00:16:31.204 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:31.204 "strip_size_kb": 0, 00:16:31.204 "state": "online", 00:16:31.204 "raid_level": "raid1", 00:16:31.204 "superblock": true, 00:16:31.204 "num_base_bdevs": 4, 00:16:31.204 "num_base_bdevs_discovered": 3, 00:16:31.204 "num_base_bdevs_operational": 3, 00:16:31.204 
"base_bdevs_list": [ 00:16:31.204 { 00:16:31.204 "name": "spare", 00:16:31.204 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:31.204 "is_configured": true, 00:16:31.204 "data_offset": 2048, 00:16:31.204 "data_size": 63488 00:16:31.204 }, 00:16:31.204 { 00:16:31.204 "name": null, 00:16:31.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.204 "is_configured": false, 00:16:31.204 "data_offset": 0, 00:16:31.204 "data_size": 63488 00:16:31.204 }, 00:16:31.204 { 00:16:31.204 "name": "BaseBdev3", 00:16:31.204 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:31.204 "is_configured": true, 00:16:31.204 "data_offset": 2048, 00:16:31.204 "data_size": 63488 00:16:31.204 }, 00:16:31.204 { 00:16:31.204 "name": "BaseBdev4", 00:16:31.204 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:31.204 "is_configured": true, 00:16:31.204 "data_offset": 2048, 00:16:31.204 "data_size": 63488 00:16:31.204 } 00:16:31.204 ] 00:16:31.204 }' 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.204 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.204 16:30:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.205 "name": "raid_bdev1", 00:16:31.205 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:31.205 "strip_size_kb": 0, 00:16:31.205 "state": "online", 00:16:31.205 "raid_level": "raid1", 00:16:31.205 "superblock": true, 00:16:31.205 "num_base_bdevs": 4, 00:16:31.205 "num_base_bdevs_discovered": 3, 00:16:31.205 "num_base_bdevs_operational": 3, 00:16:31.205 "base_bdevs_list": [ 00:16:31.205 { 00:16:31.205 "name": "spare", 00:16:31.205 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:31.205 "is_configured": true, 00:16:31.205 "data_offset": 2048, 00:16:31.205 "data_size": 63488 00:16:31.205 }, 00:16:31.205 { 00:16:31.205 "name": null, 00:16:31.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.205 "is_configured": false, 00:16:31.205 "data_offset": 0, 00:16:31.205 "data_size": 63488 00:16:31.205 }, 00:16:31.205 { 00:16:31.205 "name": "BaseBdev3", 00:16:31.205 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:31.205 "is_configured": true, 00:16:31.205 "data_offset": 2048, 00:16:31.205 "data_size": 63488 00:16:31.205 }, 00:16:31.205 { 00:16:31.205 "name": "BaseBdev4", 00:16:31.205 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:31.205 "is_configured": true, 00:16:31.205 "data_offset": 2048, 
00:16:31.205 "data_size": 63488 00:16:31.205 } 00:16:31.205 ] 00:16:31.205 }' 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.205 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.464 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.464 "name": "raid_bdev1", 00:16:31.464 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:31.464 "strip_size_kb": 0, 00:16:31.464 "state": "online", 00:16:31.464 "raid_level": "raid1", 00:16:31.464 "superblock": true, 00:16:31.464 "num_base_bdevs": 4, 00:16:31.464 "num_base_bdevs_discovered": 3, 00:16:31.464 "num_base_bdevs_operational": 3, 00:16:31.464 "base_bdevs_list": [ 00:16:31.464 { 00:16:31.464 "name": "spare", 00:16:31.464 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:31.464 "is_configured": true, 00:16:31.464 "data_offset": 2048, 00:16:31.464 "data_size": 63488 00:16:31.464 }, 00:16:31.464 { 00:16:31.464 "name": null, 00:16:31.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.464 "is_configured": false, 00:16:31.464 "data_offset": 0, 00:16:31.464 "data_size": 63488 00:16:31.464 }, 00:16:31.464 { 00:16:31.464 "name": "BaseBdev3", 00:16:31.464 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:31.464 "is_configured": true, 00:16:31.464 "data_offset": 2048, 00:16:31.464 "data_size": 63488 00:16:31.464 }, 00:16:31.464 { 00:16:31.464 "name": "BaseBdev4", 00:16:31.465 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:31.465 "is_configured": true, 00:16:31.465 "data_offset": 2048, 00:16:31.465 "data_size": 63488 00:16:31.465 } 00:16:31.465 ] 00:16:31.465 }' 00:16:31.465 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.465 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.724 83.88 IOPS, 251.62 MiB/s [2024-11-05T16:30:44.812Z] 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.724 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.724 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.724 [2024-11-05 16:30:44.740982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.724 [2024-11-05 16:30:44.741018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.724 00:16:31.724 Latency(us) 00:16:31.724 [2024-11-05T16:30:44.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.724 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:31.724 raid_bdev1 : 8.20 83.14 249.41 0.00 0.00 16286.44 341.63 113557.58 00:16:31.724 [2024-11-05T16:30:44.812Z] =================================================================================================================== 00:16:31.724 [2024-11-05T16:30:44.812Z] Total : 83.14 249.41 0.00 0.00 16286.44 341.63 113557.58 00:16:31.983 [2024-11-05 16:30:44.815145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.983 [2024-11-05 16:30:44.815201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.983 [2024-11-05 16:30:44.815318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.983 [2024-11-05 16:30:44.815330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:31.983 { 00:16:31.983 "results": [ 00:16:31.983 { 00:16:31.983 "job": "raid_bdev1", 00:16:31.983 "core_mask": "0x1", 00:16:31.983 "workload": "randrw", 00:16:31.983 "percentage": 50, 00:16:31.983 "status": "finished", 00:16:31.983 "queue_depth": 2, 00:16:31.983 "io_size": 3145728, 00:16:31.983 "runtime": 8.203394, 00:16:31.983 "iops": 
83.13632138112591, 00:16:31.983 "mibps": 249.40896414337772, 00:16:31.983 "io_failed": 0, 00:16:31.983 "io_timeout": 0, 00:16:31.983 "avg_latency_us": 16286.44009783708, 00:16:31.983 "min_latency_us": 341.63144104803496, 00:16:31.983 "max_latency_us": 113557.57554585153 00:16:31.983 } 00:16:31.983 ], 00:16:31.983 "core_count": 1 00:16:31.983 } 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.983 16:30:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.983 16:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:32.243 /dev/nbd0 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.243 1+0 records in 00:16:32.243 1+0 records out 00:16:32.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287981 s, 14.2 MB/s 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.243 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:32.502 /dev/nbd1 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.502 1+0 records in 00:16:32.502 1+0 records out 00:16:32.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398383 s, 10.3 MB/s 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.502 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.761 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:33.019 16:30:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.019 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.020 16:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:33.279 /dev/nbd1 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.279 1+0 records in 00:16:33.279 1+0 records out 00:16:33.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601042 s, 6.8 MB/s 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.279 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.538 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.797 [2024-11-05 16:30:46.802042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:33.797 [2024-11-05 16:30:46.802110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.797 [2024-11-05 16:30:46.802153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:33.797 [2024-11-05 16:30:46.802165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.797 [2024-11-05 16:30:46.804827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.797 [2024-11-05 16:30:46.804871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:33.797 [2024-11-05 16:30:46.804983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:33.797 [2024-11-05 16:30:46.805041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.797 [2024-11-05 16:30:46.805208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.797 [2024-11-05 16:30:46.805322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:33.797 spare 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.797 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 [2024-11-05 16:30:46.905267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:34.056 [2024-11-05 16:30:46.905324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.056 [2024-11-05 16:30:46.905755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:34.056 [2024-11-05 16:30:46.906015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:34.056 [2024-11-05 16:30:46.906037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:34.056 [2024-11-05 16:30:46.906286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.056 16:30:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.057 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.057 "name": "raid_bdev1", 00:16:34.057 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:34.057 "strip_size_kb": 0, 00:16:34.057 "state": "online", 00:16:34.057 "raid_level": "raid1", 00:16:34.057 "superblock": true, 00:16:34.057 "num_base_bdevs": 4, 00:16:34.057 "num_base_bdevs_discovered": 3, 00:16:34.057 "num_base_bdevs_operational": 3, 00:16:34.057 "base_bdevs_list": [ 00:16:34.057 { 00:16:34.057 "name": "spare", 00:16:34.057 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:34.057 "is_configured": true, 00:16:34.057 "data_offset": 2048, 00:16:34.057 "data_size": 63488 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": null, 00:16:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.057 "is_configured": false, 00:16:34.057 "data_offset": 2048, 00:16:34.057 "data_size": 63488 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": "BaseBdev3", 00:16:34.057 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:34.057 "is_configured": true, 00:16:34.057 "data_offset": 2048, 00:16:34.057 "data_size": 63488 00:16:34.057 }, 00:16:34.057 { 00:16:34.057 "name": "BaseBdev4", 00:16:34.057 "uuid": 
"bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:34.057 "is_configured": true, 00:16:34.057 "data_offset": 2048, 00:16:34.057 "data_size": 63488 00:16:34.057 } 00:16:34.057 ] 00:16:34.057 }' 00:16:34.057 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.057 16:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.575 "name": "raid_bdev1", 00:16:34.575 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:34.575 "strip_size_kb": 0, 00:16:34.575 "state": "online", 00:16:34.575 "raid_level": "raid1", 00:16:34.575 "superblock": true, 00:16:34.575 "num_base_bdevs": 4, 00:16:34.575 "num_base_bdevs_discovered": 3, 00:16:34.575 "num_base_bdevs_operational": 3, 00:16:34.575 
"base_bdevs_list": [ 00:16:34.575 { 00:16:34.575 "name": "spare", 00:16:34.575 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:34.575 "is_configured": true, 00:16:34.575 "data_offset": 2048, 00:16:34.575 "data_size": 63488 00:16:34.575 }, 00:16:34.575 { 00:16:34.575 "name": null, 00:16:34.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.575 "is_configured": false, 00:16:34.575 "data_offset": 2048, 00:16:34.575 "data_size": 63488 00:16:34.575 }, 00:16:34.575 { 00:16:34.575 "name": "BaseBdev3", 00:16:34.575 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:34.575 "is_configured": true, 00:16:34.575 "data_offset": 2048, 00:16:34.575 "data_size": 63488 00:16:34.575 }, 00:16:34.575 { 00:16:34.575 "name": "BaseBdev4", 00:16:34.575 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:34.575 "is_configured": true, 00:16:34.575 "data_offset": 2048, 00:16:34.575 "data_size": 63488 00:16:34.575 } 00:16:34.575 ] 00:16:34.575 }' 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.575 16:30:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.575 [2024-11-05 16:30:47.561285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.575 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.575 "name": "raid_bdev1", 00:16:34.575 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:34.575 "strip_size_kb": 0, 00:16:34.575 "state": "online", 00:16:34.575 "raid_level": "raid1", 00:16:34.575 "superblock": true, 00:16:34.575 "num_base_bdevs": 4, 00:16:34.575 "num_base_bdevs_discovered": 2, 00:16:34.575 "num_base_bdevs_operational": 2, 00:16:34.575 "base_bdevs_list": [ 00:16:34.575 { 00:16:34.575 "name": null, 00:16:34.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.575 "is_configured": false, 00:16:34.575 "data_offset": 0, 00:16:34.576 "data_size": 63488 00:16:34.576 }, 00:16:34.576 { 00:16:34.576 "name": null, 00:16:34.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.576 "is_configured": false, 00:16:34.576 "data_offset": 2048, 00:16:34.576 "data_size": 63488 00:16:34.576 }, 00:16:34.576 { 00:16:34.576 "name": "BaseBdev3", 00:16:34.576 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:34.576 "is_configured": true, 00:16:34.576 "data_offset": 2048, 00:16:34.576 "data_size": 63488 00:16:34.576 }, 00:16:34.576 { 00:16:34.576 "name": "BaseBdev4", 00:16:34.576 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:34.576 "is_configured": true, 00:16:34.576 "data_offset": 2048, 00:16:34.576 "data_size": 63488 00:16:34.576 } 00:16:34.576 ] 00:16:34.576 }' 00:16:34.576 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.576 16:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.151 16:30:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.151 16:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.151 16:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.151 [2024-11-05 16:30:48.076633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.151 [2024-11-05 16:30:48.076923] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:35.151 [2024-11-05 16:30:48.076999] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:35.151 [2024-11-05 16:30:48.077078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.151 [2024-11-05 16:30:48.095509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:35.151 16:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.151 16:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.151 [2024-11-05 16:30:48.097799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.087 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.087 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.087 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.087 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.088 "name": "raid_bdev1", 00:16:36.088 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:36.088 "strip_size_kb": 0, 00:16:36.088 "state": "online", 00:16:36.088 "raid_level": "raid1", 00:16:36.088 "superblock": true, 00:16:36.088 "num_base_bdevs": 4, 00:16:36.088 "num_base_bdevs_discovered": 3, 00:16:36.088 "num_base_bdevs_operational": 3, 00:16:36.088 "process": { 00:16:36.088 "type": "rebuild", 00:16:36.088 "target": "spare", 00:16:36.088 "progress": { 00:16:36.088 "blocks": 20480, 00:16:36.088 "percent": 32 00:16:36.088 } 00:16:36.088 }, 00:16:36.088 "base_bdevs_list": [ 00:16:36.088 { 00:16:36.088 "name": "spare", 00:16:36.088 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:36.088 "is_configured": true, 00:16:36.088 "data_offset": 2048, 00:16:36.088 "data_size": 63488 00:16:36.088 }, 00:16:36.088 { 00:16:36.088 "name": null, 00:16:36.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.088 "is_configured": false, 00:16:36.088 "data_offset": 2048, 00:16:36.088 "data_size": 63488 00:16:36.088 }, 00:16:36.088 { 00:16:36.088 "name": "BaseBdev3", 00:16:36.088 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:36.088 "is_configured": true, 00:16:36.088 "data_offset": 2048, 00:16:36.088 "data_size": 63488 00:16:36.088 }, 00:16:36.088 { 00:16:36.088 "name": "BaseBdev4", 00:16:36.088 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:36.088 
"is_configured": true, 00:16:36.088 "data_offset": 2048, 00:16:36.088 "data_size": 63488 00:16:36.088 } 00:16:36.088 ] 00:16:36.088 }' 00:16:36.088 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.347 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.347 [2024-11-05 16:30:49.241001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.347 [2024-11-05 16:30:49.303876] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.347 [2024-11-05 16:30:49.304082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.347 [2024-11-05 16:30:49.304113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.348 [2024-11-05 16:30:49.304123] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.348 "name": "raid_bdev1", 00:16:36.348 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:36.348 "strip_size_kb": 0, 00:16:36.348 "state": "online", 00:16:36.348 "raid_level": "raid1", 00:16:36.348 "superblock": true, 00:16:36.348 "num_base_bdevs": 4, 00:16:36.348 "num_base_bdevs_discovered": 2, 00:16:36.348 "num_base_bdevs_operational": 2, 00:16:36.348 "base_bdevs_list": [ 00:16:36.348 { 00:16:36.348 "name": null, 00:16:36.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.348 
"is_configured": false, 00:16:36.348 "data_offset": 0, 00:16:36.348 "data_size": 63488 00:16:36.348 }, 00:16:36.348 { 00:16:36.348 "name": null, 00:16:36.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.348 "is_configured": false, 00:16:36.348 "data_offset": 2048, 00:16:36.348 "data_size": 63488 00:16:36.348 }, 00:16:36.348 { 00:16:36.348 "name": "BaseBdev3", 00:16:36.348 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:36.348 "is_configured": true, 00:16:36.348 "data_offset": 2048, 00:16:36.348 "data_size": 63488 00:16:36.348 }, 00:16:36.348 { 00:16:36.348 "name": "BaseBdev4", 00:16:36.348 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:36.348 "is_configured": true, 00:16:36.348 "data_offset": 2048, 00:16:36.348 "data_size": 63488 00:16:36.348 } 00:16:36.348 ] 00:16:36.348 }' 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.348 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.919 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:36.919 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.919 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:36.919 [2024-11-05 16:30:49.791747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:36.919 [2024-11-05 16:30:49.791890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.919 [2024-11-05 16:30:49.791926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:36.919 [2024-11-05 16:30:49.791937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.919 [2024-11-05 16:30:49.792469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.919 [2024-11-05 
16:30:49.792491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:36.919 [2024-11-05 16:30:49.792615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:36.919 [2024-11-05 16:30:49.792631] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:36.919 [2024-11-05 16:30:49.792644] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:36.919 [2024-11-05 16:30:49.792667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.919 spare 00:16:36.919 [2024-11-05 16:30:49.809360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:36.919 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.919 16:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:36.919 [2024-11-05 16:30:49.811373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.860 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.860 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.860 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.860 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.861 
16:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.861 "name": "raid_bdev1", 00:16:37.861 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:37.861 "strip_size_kb": 0, 00:16:37.861 "state": "online", 00:16:37.861 "raid_level": "raid1", 00:16:37.861 "superblock": true, 00:16:37.861 "num_base_bdevs": 4, 00:16:37.861 "num_base_bdevs_discovered": 3, 00:16:37.861 "num_base_bdevs_operational": 3, 00:16:37.861 "process": { 00:16:37.861 "type": "rebuild", 00:16:37.861 "target": "spare", 00:16:37.861 "progress": { 00:16:37.861 "blocks": 20480, 00:16:37.861 "percent": 32 00:16:37.861 } 00:16:37.861 }, 00:16:37.861 "base_bdevs_list": [ 00:16:37.861 { 00:16:37.861 "name": "spare", 00:16:37.861 "uuid": "14eb4dfc-254c-54f1-8222-87dbbf0d08e1", 00:16:37.861 "is_configured": true, 00:16:37.861 "data_offset": 2048, 00:16:37.861 "data_size": 63488 00:16:37.861 }, 00:16:37.861 { 00:16:37.861 "name": null, 00:16:37.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.861 "is_configured": false, 00:16:37.861 "data_offset": 2048, 00:16:37.861 "data_size": 63488 00:16:37.861 }, 00:16:37.861 { 00:16:37.861 "name": "BaseBdev3", 00:16:37.861 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:37.861 "is_configured": true, 00:16:37.861 "data_offset": 2048, 00:16:37.861 "data_size": 63488 00:16:37.861 }, 00:16:37.861 { 00:16:37.861 "name": "BaseBdev4", 00:16:37.861 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:37.861 "is_configured": true, 00:16:37.861 "data_offset": 2048, 00:16:37.861 "data_size": 63488 00:16:37.861 } 00:16:37.861 ] 00:16:37.861 }' 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.861 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.121 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.121 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.121 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.121 16:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.121 [2024-11-05 16:30:50.962887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.121 [2024-11-05 16:30:51.017939] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.121 [2024-11-05 16:30:51.018064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.121 [2024-11-05 16:30:51.018087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.121 [2024-11-05 16:30:51.018100] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.121 16:30:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.121 "name": "raid_bdev1", 00:16:38.121 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:38.121 "strip_size_kb": 0, 00:16:38.121 "state": "online", 00:16:38.121 "raid_level": "raid1", 00:16:38.121 "superblock": true, 00:16:38.121 "num_base_bdevs": 4, 00:16:38.121 "num_base_bdevs_discovered": 2, 00:16:38.121 "num_base_bdevs_operational": 2, 00:16:38.121 "base_bdevs_list": [ 00:16:38.121 { 00:16:38.121 "name": null, 00:16:38.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.121 "is_configured": false, 00:16:38.121 "data_offset": 0, 00:16:38.121 "data_size": 63488 00:16:38.121 }, 00:16:38.121 { 00:16:38.121 "name": null, 00:16:38.121 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.121 "is_configured": false, 00:16:38.121 "data_offset": 2048, 00:16:38.121 "data_size": 63488 00:16:38.121 }, 00:16:38.121 { 00:16:38.121 "name": "BaseBdev3", 00:16:38.121 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:38.121 "is_configured": true, 00:16:38.121 "data_offset": 2048, 00:16:38.121 "data_size": 63488 00:16:38.121 }, 00:16:38.121 { 00:16:38.121 "name": "BaseBdev4", 00:16:38.121 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:38.121 "is_configured": true, 00:16:38.121 "data_offset": 2048, 00:16:38.121 "data_size": 63488 00:16:38.121 } 00:16:38.121 ] 00:16:38.121 }' 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.121 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.689 
16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.689 "name": "raid_bdev1", 00:16:38.689 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:38.689 "strip_size_kb": 0, 00:16:38.689 "state": "online", 00:16:38.689 "raid_level": "raid1", 00:16:38.689 "superblock": true, 00:16:38.689 "num_base_bdevs": 4, 00:16:38.689 "num_base_bdevs_discovered": 2, 00:16:38.689 "num_base_bdevs_operational": 2, 00:16:38.689 "base_bdevs_list": [ 00:16:38.689 { 00:16:38.689 "name": null, 00:16:38.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.689 "is_configured": false, 00:16:38.689 "data_offset": 0, 00:16:38.689 "data_size": 63488 00:16:38.689 }, 00:16:38.689 { 00:16:38.689 "name": null, 00:16:38.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.689 "is_configured": false, 00:16:38.689 "data_offset": 2048, 00:16:38.689 "data_size": 63488 00:16:38.689 }, 00:16:38.689 { 00:16:38.689 "name": "BaseBdev3", 00:16:38.689 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:38.689 "is_configured": true, 00:16:38.689 "data_offset": 2048, 00:16:38.689 "data_size": 63488 00:16:38.689 }, 00:16:38.689 { 00:16:38.689 "name": "BaseBdev4", 00:16:38.689 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:38.689 "is_configured": true, 00:16:38.689 "data_offset": 2048, 00:16:38.689 "data_size": 63488 00:16:38.689 } 00:16:38.689 ] 00:16:38.689 }' 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.689 [2024-11-05 16:30:51.691081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:38.689 [2024-11-05 16:30:51.691155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.689 [2024-11-05 16:30:51.691181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:38.689 [2024-11-05 16:30:51.691200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.689 [2024-11-05 16:30:51.691747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.689 [2024-11-05 16:30:51.691779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.689 [2024-11-05 16:30:51.691887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:38.689 [2024-11-05 16:30:51.691921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:38.689 [2024-11-05 16:30:51.691936] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:38.689 [2024-11-05 16:30:51.691967] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:16:38.689 BaseBdev1 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.689 16:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.624 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.882 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.882 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.882 "name": "raid_bdev1", 00:16:39.882 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:39.882 "strip_size_kb": 0, 00:16:39.882 "state": "online", 00:16:39.882 "raid_level": "raid1", 00:16:39.882 "superblock": true, 00:16:39.882 "num_base_bdevs": 4, 00:16:39.882 "num_base_bdevs_discovered": 2, 00:16:39.882 "num_base_bdevs_operational": 2, 00:16:39.882 "base_bdevs_list": [ 00:16:39.882 { 00:16:39.882 "name": null, 00:16:39.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.882 "is_configured": false, 00:16:39.882 "data_offset": 0, 00:16:39.882 "data_size": 63488 00:16:39.882 }, 00:16:39.882 { 00:16:39.882 "name": null, 00:16:39.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.882 "is_configured": false, 00:16:39.882 "data_offset": 2048, 00:16:39.882 "data_size": 63488 00:16:39.882 }, 00:16:39.882 { 00:16:39.882 "name": "BaseBdev3", 00:16:39.882 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:39.882 "is_configured": true, 00:16:39.882 "data_offset": 2048, 00:16:39.882 "data_size": 63488 00:16:39.882 }, 00:16:39.882 { 00:16:39.882 "name": "BaseBdev4", 00:16:39.882 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:39.882 "is_configured": true, 00:16:39.882 "data_offset": 2048, 00:16:39.882 "data_size": 63488 00:16:39.882 } 00:16:39.883 ] 00:16:39.883 }' 00:16:39.883 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.883 16:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.139 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.139 "name": "raid_bdev1", 00:16:40.139 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:40.139 "strip_size_kb": 0, 00:16:40.139 "state": "online", 00:16:40.139 "raid_level": "raid1", 00:16:40.139 "superblock": true, 00:16:40.139 "num_base_bdevs": 4, 00:16:40.139 "num_base_bdevs_discovered": 2, 00:16:40.139 "num_base_bdevs_operational": 2, 00:16:40.139 "base_bdevs_list": [ 00:16:40.139 { 00:16:40.140 "name": null, 00:16:40.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.140 "is_configured": false, 00:16:40.140 "data_offset": 0, 00:16:40.140 "data_size": 63488 00:16:40.140 }, 00:16:40.140 { 00:16:40.140 "name": null, 00:16:40.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.140 "is_configured": false, 00:16:40.140 "data_offset": 2048, 00:16:40.140 "data_size": 63488 00:16:40.140 }, 00:16:40.140 { 00:16:40.140 "name": "BaseBdev3", 00:16:40.140 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:40.140 "is_configured": true, 00:16:40.140 "data_offset": 2048, 00:16:40.140 "data_size": 63488 00:16:40.140 }, 00:16:40.140 { 00:16:40.140 "name": "BaseBdev4", 00:16:40.140 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 
00:16:40.140 "is_configured": true, 00:16:40.140 "data_offset": 2048, 00:16:40.140 "data_size": 63488 00:16:40.140 } 00:16:40.140 ] 00:16:40.140 }' 00:16:40.140 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.140 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.140 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.398 [2024-11-05 16:30:53.268827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.398 [2024-11-05 
16:30:53.269168] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:40.398 [2024-11-05 16:30:53.269261] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.398 request: 00:16:40.398 { 00:16:40.398 "base_bdev": "BaseBdev1", 00:16:40.398 "raid_bdev": "raid_bdev1", 00:16:40.398 "method": "bdev_raid_add_base_bdev", 00:16:40.398 "req_id": 1 00:16:40.398 } 00:16:40.398 Got JSON-RPC error response 00:16:40.398 response: 00:16:40.398 { 00:16:40.398 "code": -22, 00:16:40.398 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:40.398 } 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:40.398 16:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.333 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.333 "name": "raid_bdev1", 00:16:41.333 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:41.333 "strip_size_kb": 0, 00:16:41.333 "state": "online", 00:16:41.333 "raid_level": "raid1", 00:16:41.333 "superblock": true, 00:16:41.333 "num_base_bdevs": 4, 00:16:41.333 "num_base_bdevs_discovered": 2, 00:16:41.333 "num_base_bdevs_operational": 2, 00:16:41.333 "base_bdevs_list": [ 00:16:41.333 { 00:16:41.333 "name": null, 00:16:41.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.333 "is_configured": false, 00:16:41.333 "data_offset": 0, 00:16:41.333 "data_size": 63488 00:16:41.333 }, 00:16:41.333 { 00:16:41.333 "name": null, 00:16:41.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.333 "is_configured": false, 00:16:41.333 "data_offset": 2048, 00:16:41.333 "data_size": 63488 00:16:41.333 }, 00:16:41.333 { 00:16:41.333 "name": 
"BaseBdev3", 00:16:41.333 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:41.333 "is_configured": true, 00:16:41.333 "data_offset": 2048, 00:16:41.333 "data_size": 63488 00:16:41.333 }, 00:16:41.333 { 00:16:41.334 "name": "BaseBdev4", 00:16:41.334 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:41.334 "is_configured": true, 00:16:41.334 "data_offset": 2048, 00:16:41.334 "data_size": 63488 00:16:41.334 } 00:16:41.334 ] 00:16:41.334 }' 00:16:41.334 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.334 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.592 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.850 "name": "raid_bdev1", 00:16:41.850 "uuid": "ebbb2d8d-6270-49b8-890b-048f9ab2d4b5", 00:16:41.850 
"strip_size_kb": 0, 00:16:41.850 "state": "online", 00:16:41.850 "raid_level": "raid1", 00:16:41.850 "superblock": true, 00:16:41.850 "num_base_bdevs": 4, 00:16:41.850 "num_base_bdevs_discovered": 2, 00:16:41.850 "num_base_bdevs_operational": 2, 00:16:41.850 "base_bdevs_list": [ 00:16:41.850 { 00:16:41.850 "name": null, 00:16:41.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.850 "is_configured": false, 00:16:41.850 "data_offset": 0, 00:16:41.850 "data_size": 63488 00:16:41.850 }, 00:16:41.850 { 00:16:41.850 "name": null, 00:16:41.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.850 "is_configured": false, 00:16:41.850 "data_offset": 2048, 00:16:41.850 "data_size": 63488 00:16:41.850 }, 00:16:41.850 { 00:16:41.850 "name": "BaseBdev3", 00:16:41.850 "uuid": "55d9b37c-3963-5d33-8f01-f3ba0828b8f4", 00:16:41.850 "is_configured": true, 00:16:41.850 "data_offset": 2048, 00:16:41.850 "data_size": 63488 00:16:41.850 }, 00:16:41.850 { 00:16:41.850 "name": "BaseBdev4", 00:16:41.850 "uuid": "bc7900a5-86bb-5c0c-b35d-d945bd741b03", 00:16:41.850 "is_configured": true, 00:16:41.850 "data_offset": 2048, 00:16:41.850 "data_size": 63488 00:16:41.850 } 00:16:41.850 ] 00:16:41.850 }' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79504 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79504 ']' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79504 00:16:41.850 
16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79504 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79504' 00:16:41.850 killing process with pid 79504 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79504 00:16:41.850 Received shutdown signal, test time was about 18.240005 seconds 00:16:41.850 00:16:41.850 Latency(us) 00:16:41.850 [2024-11-05T16:30:54.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.850 [2024-11-05T16:30:54.938Z] =================================================================================================================== 00:16:41.850 [2024-11-05T16:30:54.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:41.850 16:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79504 00:16:41.850 [2024-11-05 16:30:54.806159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.850 [2024-11-05 16:30:54.806347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.850 [2024-11-05 16:30:54.806471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.850 [2024-11-05 16:30:54.806539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:42.417 [2024-11-05 16:30:55.290790] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.797 16:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:43.797 00:16:43.797 real 0m22.002s 00:16:43.797 user 0m28.744s 00:16:43.797 sys 0m2.602s 00:16:43.797 16:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.797 16:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:43.797 ************************************ 00:16:43.797 END TEST raid_rebuild_test_sb_io 00:16:43.797 ************************************ 00:16:43.797 16:30:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:43.797 16:30:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:43.797 16:30:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:43.797 16:30:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.797 16:30:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.797 ************************************ 00:16:43.797 START TEST raid5f_state_function_test 00:16:43.797 ************************************ 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80231 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80231' 00:16:43.797 Process raid pid: 80231 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80231 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80231 ']' 00:16:43.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.797 16:30:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.797 [2024-11-05 16:30:56.853994] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
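The startup sequence above spawns `bdev_svc`, records `raid_pid`, and then blocks in `waitforlisten` until the process is alive and its UNIX-domain RPC socket (`/var/tmp/spdk.sock`) appears. A simplified stand-in for that polling pattern, not SPDK's actual helper (the retry budget of 100 mirrors `max_retries` in the log; the sleep interval and socket test are assumptions):

```shell
# Sketch of the waitforlisten step: poll until the target process is up and
# its RPC socket exists. kill -0 probes liveness without sending a signal;
# [[ -S ]] checks that the path is a listening UNIX-domain socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target process is gone
        [[ -S "$rpc_addr" ]] && return 0        # RPC socket is up: ready
        sleep 0.1
    done
    return 1                                    # gave up waiting
}

# Demo against a short-lived process: once it has exited, the poll
# reports failure immediately instead of burning the retry budget.
sleep 0.2 & demo_pid=$!
wait "$demo_pid"
if waitforlisten "$demo_pid" /tmp/no-such.sock; then
    status=ready
else
    status=not-ready
fi
echo "status=$status"
```

In the real harness the happy path returns 0 as soon as the socket shows up, at which point the script proceeds to issue `rpc_cmd` calls such as the `bdev_raid_create` seen below.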
00:16:43.797 [2024-11-05 16:30:56.854169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.056 [2024-11-05 16:30:57.030807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.314 [2024-11-05 16:30:57.157034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.314 [2024-11-05 16:30:57.389964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.314 [2024-11-05 16:30:57.390109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.894 [2024-11-05 16:30:57.735483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.894 [2024-11-05 16:30:57.735631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.894 [2024-11-05 16:30:57.735649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.894 [2024-11-05 16:30:57.735660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.894 [2024-11-05 16:30:57.735667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:44.894 [2024-11-05 16:30:57.735677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.894 "name": "Existed_Raid", 00:16:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.894 "strip_size_kb": 64, 00:16:44.894 "state": "configuring", 00:16:44.894 "raid_level": "raid5f", 00:16:44.894 "superblock": false, 00:16:44.894 "num_base_bdevs": 3, 00:16:44.894 "num_base_bdevs_discovered": 0, 00:16:44.894 "num_base_bdevs_operational": 3, 00:16:44.894 "base_bdevs_list": [ 00:16:44.894 { 00:16:44.894 "name": "BaseBdev1", 00:16:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.894 "is_configured": false, 00:16:44.894 "data_offset": 0, 00:16:44.894 "data_size": 0 00:16:44.894 }, 00:16:44.894 { 00:16:44.894 "name": "BaseBdev2", 00:16:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.894 "is_configured": false, 00:16:44.894 "data_offset": 0, 00:16:44.894 "data_size": 0 00:16:44.894 }, 00:16:44.894 { 00:16:44.894 "name": "BaseBdev3", 00:16:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.894 "is_configured": false, 00:16:44.894 "data_offset": 0, 00:16:44.894 "data_size": 0 00:16:44.894 } 00:16:44.894 ] 00:16:44.894 }' 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.894 16:30:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.154 [2024-11-05 16:30:58.174689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.154 [2024-11-05 16:30:58.174785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.154 [2024-11-05 16:30:58.186654] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.154 [2024-11-05 16:30:58.186737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.154 [2024-11-05 16:30:58.186765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.154 [2024-11-05 16:30:58.186789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.154 [2024-11-05 16:30:58.186808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.154 [2024-11-05 16:30:58.186828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.154 [2024-11-05 16:30:58.235671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.154 BaseBdev1 00:16:45.154 16:30:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.154 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.414 [ 00:16:45.414 { 00:16:45.414 "name": "BaseBdev1", 00:16:45.414 "aliases": [ 00:16:45.414 "7632333a-3337-444a-8218-ef08b89a6ff0" 00:16:45.414 ], 00:16:45.414 "product_name": "Malloc disk", 00:16:45.414 "block_size": 512, 00:16:45.414 "num_blocks": 65536, 00:16:45.414 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:45.414 "assigned_rate_limits": { 00:16:45.414 "rw_ios_per_sec": 0, 00:16:45.414 
"rw_mbytes_per_sec": 0, 00:16:45.414 "r_mbytes_per_sec": 0, 00:16:45.414 "w_mbytes_per_sec": 0 00:16:45.414 }, 00:16:45.414 "claimed": true, 00:16:45.414 "claim_type": "exclusive_write", 00:16:45.414 "zoned": false, 00:16:45.414 "supported_io_types": { 00:16:45.414 "read": true, 00:16:45.414 "write": true, 00:16:45.414 "unmap": true, 00:16:45.414 "flush": true, 00:16:45.414 "reset": true, 00:16:45.414 "nvme_admin": false, 00:16:45.414 "nvme_io": false, 00:16:45.414 "nvme_io_md": false, 00:16:45.414 "write_zeroes": true, 00:16:45.414 "zcopy": true, 00:16:45.414 "get_zone_info": false, 00:16:45.414 "zone_management": false, 00:16:45.414 "zone_append": false, 00:16:45.414 "compare": false, 00:16:45.414 "compare_and_write": false, 00:16:45.414 "abort": true, 00:16:45.414 "seek_hole": false, 00:16:45.414 "seek_data": false, 00:16:45.414 "copy": true, 00:16:45.414 "nvme_iov_md": false 00:16:45.414 }, 00:16:45.414 "memory_domains": [ 00:16:45.414 { 00:16:45.414 "dma_device_id": "system", 00:16:45.414 "dma_device_type": 1 00:16:45.414 }, 00:16:45.414 { 00:16:45.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.414 "dma_device_type": 2 00:16:45.414 } 00:16:45.414 ], 00:16:45.414 "driver_specific": {} 00:16:45.414 } 00:16:45.414 ] 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.414 16:30:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.414 "name": "Existed_Raid", 00:16:45.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.414 "strip_size_kb": 64, 00:16:45.414 "state": "configuring", 00:16:45.414 "raid_level": "raid5f", 00:16:45.414 "superblock": false, 00:16:45.414 "num_base_bdevs": 3, 00:16:45.414 "num_base_bdevs_discovered": 1, 00:16:45.414 "num_base_bdevs_operational": 3, 00:16:45.414 "base_bdevs_list": [ 00:16:45.414 { 00:16:45.414 "name": "BaseBdev1", 00:16:45.414 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:45.414 "is_configured": true, 00:16:45.414 "data_offset": 0, 00:16:45.414 "data_size": 65536 00:16:45.414 }, 00:16:45.414 { 00:16:45.414 "name": 
"BaseBdev2", 00:16:45.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.414 "is_configured": false, 00:16:45.414 "data_offset": 0, 00:16:45.414 "data_size": 0 00:16:45.414 }, 00:16:45.414 { 00:16:45.414 "name": "BaseBdev3", 00:16:45.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.414 "is_configured": false, 00:16:45.414 "data_offset": 0, 00:16:45.414 "data_size": 0 00:16:45.414 } 00:16:45.414 ] 00:16:45.414 }' 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.414 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.983 [2024-11-05 16:30:58.770837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.983 [2024-11-05 16:30:58.770898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.983 [2024-11-05 16:30:58.782868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.983 [2024-11-05 16:30:58.785083] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:45.983 [2024-11-05 16:30:58.785133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.983 [2024-11-05 16:30:58.785146] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.983 [2024-11-05 16:30:58.785157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.983 "name": "Existed_Raid", 00:16:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.983 "strip_size_kb": 64, 00:16:45.983 "state": "configuring", 00:16:45.983 "raid_level": "raid5f", 00:16:45.983 "superblock": false, 00:16:45.983 "num_base_bdevs": 3, 00:16:45.983 "num_base_bdevs_discovered": 1, 00:16:45.983 "num_base_bdevs_operational": 3, 00:16:45.983 "base_bdevs_list": [ 00:16:45.983 { 00:16:45.983 "name": "BaseBdev1", 00:16:45.983 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:45.983 "is_configured": true, 00:16:45.983 "data_offset": 0, 00:16:45.983 "data_size": 65536 00:16:45.983 }, 00:16:45.983 { 00:16:45.983 "name": "BaseBdev2", 00:16:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.983 "is_configured": false, 00:16:45.983 "data_offset": 0, 00:16:45.983 "data_size": 0 00:16:45.983 }, 00:16:45.983 { 00:16:45.983 "name": "BaseBdev3", 00:16:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.983 "is_configured": false, 00:16:45.983 "data_offset": 0, 00:16:45.983 "data_size": 0 00:16:45.983 } 00:16:45.983 ] 00:16:45.983 }' 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.983 16:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.243 [2024-11-05 16:30:59.306913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.243 BaseBdev2 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.243 16:30:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.502 [ 00:16:46.502 { 00:16:46.502 "name": "BaseBdev2", 00:16:46.502 "aliases": [ 00:16:46.502 "5c8becbf-8708-456b-8a0c-04696c47a696" 00:16:46.502 ], 00:16:46.502 "product_name": "Malloc disk", 00:16:46.502 "block_size": 512, 00:16:46.502 "num_blocks": 65536, 00:16:46.502 "uuid": "5c8becbf-8708-456b-8a0c-04696c47a696", 00:16:46.502 "assigned_rate_limits": { 00:16:46.502 "rw_ios_per_sec": 0, 00:16:46.502 "rw_mbytes_per_sec": 0, 00:16:46.502 "r_mbytes_per_sec": 0, 00:16:46.502 "w_mbytes_per_sec": 0 00:16:46.502 }, 00:16:46.502 "claimed": true, 00:16:46.502 "claim_type": "exclusive_write", 00:16:46.502 "zoned": false, 00:16:46.502 "supported_io_types": { 00:16:46.502 "read": true, 00:16:46.502 "write": true, 00:16:46.502 "unmap": true, 00:16:46.502 "flush": true, 00:16:46.502 "reset": true, 00:16:46.502 "nvme_admin": false, 00:16:46.502 "nvme_io": false, 00:16:46.502 "nvme_io_md": false, 00:16:46.502 "write_zeroes": true, 00:16:46.502 "zcopy": true, 00:16:46.502 "get_zone_info": false, 00:16:46.502 "zone_management": false, 00:16:46.502 "zone_append": false, 00:16:46.503 "compare": false, 00:16:46.503 "compare_and_write": false, 00:16:46.503 "abort": true, 00:16:46.503 "seek_hole": false, 00:16:46.503 "seek_data": false, 00:16:46.503 "copy": true, 00:16:46.503 "nvme_iov_md": false 00:16:46.503 }, 00:16:46.503 "memory_domains": [ 00:16:46.503 { 00:16:46.503 "dma_device_id": "system", 00:16:46.503 "dma_device_type": 1 00:16:46.503 }, 00:16:46.503 { 00:16:46.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.503 "dma_device_type": 2 00:16:46.503 } 00:16:46.503 ], 00:16:46.503 "driver_specific": {} 00:16:46.503 } 00:16:46.503 ] 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:46.503 "name": "Existed_Raid", 00:16:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.503 "strip_size_kb": 64, 00:16:46.503 "state": "configuring", 00:16:46.503 "raid_level": "raid5f", 00:16:46.503 "superblock": false, 00:16:46.503 "num_base_bdevs": 3, 00:16:46.503 "num_base_bdevs_discovered": 2, 00:16:46.503 "num_base_bdevs_operational": 3, 00:16:46.503 "base_bdevs_list": [ 00:16:46.503 { 00:16:46.503 "name": "BaseBdev1", 00:16:46.503 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:46.503 "is_configured": true, 00:16:46.503 "data_offset": 0, 00:16:46.503 "data_size": 65536 00:16:46.503 }, 00:16:46.503 { 00:16:46.503 "name": "BaseBdev2", 00:16:46.503 "uuid": "5c8becbf-8708-456b-8a0c-04696c47a696", 00:16:46.503 "is_configured": true, 00:16:46.503 "data_offset": 0, 00:16:46.503 "data_size": 65536 00:16:46.503 }, 00:16:46.503 { 00:16:46.503 "name": "BaseBdev3", 00:16:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.503 "is_configured": false, 00:16:46.503 "data_offset": 0, 00:16:46.503 "data_size": 0 00:16:46.503 } 00:16:46.503 ] 00:16:46.503 }' 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.503 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.761 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.761 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.761 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.018 [2024-11-05 16:30:59.865761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.018 [2024-11-05 16:30:59.865847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.018 [2024-11-05 16:30:59.865865] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:47.018 [2024-11-05 16:30:59.866181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:47.018 [2024-11-05 16:30:59.873234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.018 [2024-11-05 16:30:59.873331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:47.018 [2024-11-05 16:30:59.873667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.018 BaseBdev3 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.018 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.018 [ 00:16:47.019 { 00:16:47.019 "name": "BaseBdev3", 00:16:47.019 "aliases": [ 00:16:47.019 "401cb329-47bc-45d9-81fa-5b7648ad456c" 00:16:47.019 ], 00:16:47.019 "product_name": "Malloc disk", 00:16:47.019 "block_size": 512, 00:16:47.019 "num_blocks": 65536, 00:16:47.019 "uuid": "401cb329-47bc-45d9-81fa-5b7648ad456c", 00:16:47.019 "assigned_rate_limits": { 00:16:47.019 "rw_ios_per_sec": 0, 00:16:47.019 "rw_mbytes_per_sec": 0, 00:16:47.019 "r_mbytes_per_sec": 0, 00:16:47.019 "w_mbytes_per_sec": 0 00:16:47.019 }, 00:16:47.019 "claimed": true, 00:16:47.019 "claim_type": "exclusive_write", 00:16:47.019 "zoned": false, 00:16:47.019 "supported_io_types": { 00:16:47.019 "read": true, 00:16:47.019 "write": true, 00:16:47.019 "unmap": true, 00:16:47.019 "flush": true, 00:16:47.019 "reset": true, 00:16:47.019 "nvme_admin": false, 00:16:47.019 "nvme_io": false, 00:16:47.019 "nvme_io_md": false, 00:16:47.019 "write_zeroes": true, 00:16:47.019 "zcopy": true, 00:16:47.019 "get_zone_info": false, 00:16:47.019 "zone_management": false, 00:16:47.019 "zone_append": false, 00:16:47.019 "compare": false, 00:16:47.019 "compare_and_write": false, 00:16:47.019 "abort": true, 00:16:47.019 "seek_hole": false, 00:16:47.019 "seek_data": false, 00:16:47.019 "copy": true, 00:16:47.019 "nvme_iov_md": false 00:16:47.019 }, 00:16:47.019 "memory_domains": [ 00:16:47.019 { 00:16:47.019 "dma_device_id": "system", 00:16:47.019 "dma_device_type": 1 00:16:47.019 }, 00:16:47.019 { 00:16:47.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.019 "dma_device_type": 2 00:16:47.019 } 00:16:47.019 ], 00:16:47.019 "driver_specific": {} 00:16:47.019 } 00:16:47.019 ] 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.019 16:30:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.019 "name": "Existed_Raid", 00:16:47.019 "uuid": "5a77cd44-1d37-4438-a6be-d70ee54ac836", 00:16:47.019 "strip_size_kb": 64, 00:16:47.019 "state": "online", 00:16:47.019 "raid_level": "raid5f", 00:16:47.019 "superblock": false, 00:16:47.019 "num_base_bdevs": 3, 00:16:47.019 "num_base_bdevs_discovered": 3, 00:16:47.019 "num_base_bdevs_operational": 3, 00:16:47.019 "base_bdevs_list": [ 00:16:47.019 { 00:16:47.019 "name": "BaseBdev1", 00:16:47.019 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:47.019 "is_configured": true, 00:16:47.019 "data_offset": 0, 00:16:47.019 "data_size": 65536 00:16:47.019 }, 00:16:47.019 { 00:16:47.019 "name": "BaseBdev2", 00:16:47.019 "uuid": "5c8becbf-8708-456b-8a0c-04696c47a696", 00:16:47.019 "is_configured": true, 00:16:47.019 "data_offset": 0, 00:16:47.019 "data_size": 65536 00:16:47.019 }, 00:16:47.019 { 00:16:47.019 "name": "BaseBdev3", 00:16:47.019 "uuid": "401cb329-47bc-45d9-81fa-5b7648ad456c", 00:16:47.019 "is_configured": true, 00:16:47.019 "data_offset": 0, 00:16:47.019 "data_size": 65536 00:16:47.019 } 00:16:47.019 ] 00:16:47.019 }' 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.019 16:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.276 16:31:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.276 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.276 [2024-11-05 16:31:00.364927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.535 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.535 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.535 "name": "Existed_Raid", 00:16:47.535 "aliases": [ 00:16:47.535 "5a77cd44-1d37-4438-a6be-d70ee54ac836" 00:16:47.535 ], 00:16:47.535 "product_name": "Raid Volume", 00:16:47.535 "block_size": 512, 00:16:47.535 "num_blocks": 131072, 00:16:47.535 "uuid": "5a77cd44-1d37-4438-a6be-d70ee54ac836", 00:16:47.535 "assigned_rate_limits": { 00:16:47.535 "rw_ios_per_sec": 0, 00:16:47.535 "rw_mbytes_per_sec": 0, 00:16:47.535 "r_mbytes_per_sec": 0, 00:16:47.535 "w_mbytes_per_sec": 0 00:16:47.535 }, 00:16:47.535 "claimed": false, 00:16:47.535 "zoned": false, 00:16:47.535 "supported_io_types": { 00:16:47.535 "read": true, 00:16:47.535 "write": true, 00:16:47.535 "unmap": false, 00:16:47.535 "flush": false, 00:16:47.535 "reset": true, 00:16:47.535 "nvme_admin": false, 00:16:47.535 "nvme_io": false, 00:16:47.535 "nvme_io_md": false, 00:16:47.535 "write_zeroes": true, 00:16:47.535 "zcopy": false, 00:16:47.535 "get_zone_info": false, 00:16:47.535 "zone_management": false, 00:16:47.535 "zone_append": false, 
00:16:47.535 "compare": false, 00:16:47.535 "compare_and_write": false, 00:16:47.535 "abort": false, 00:16:47.536 "seek_hole": false, 00:16:47.536 "seek_data": false, 00:16:47.536 "copy": false, 00:16:47.536 "nvme_iov_md": false 00:16:47.536 }, 00:16:47.536 "driver_specific": { 00:16:47.536 "raid": { 00:16:47.536 "uuid": "5a77cd44-1d37-4438-a6be-d70ee54ac836", 00:16:47.536 "strip_size_kb": 64, 00:16:47.536 "state": "online", 00:16:47.536 "raid_level": "raid5f", 00:16:47.536 "superblock": false, 00:16:47.536 "num_base_bdevs": 3, 00:16:47.536 "num_base_bdevs_discovered": 3, 00:16:47.536 "num_base_bdevs_operational": 3, 00:16:47.536 "base_bdevs_list": [ 00:16:47.536 { 00:16:47.536 "name": "BaseBdev1", 00:16:47.536 "uuid": "7632333a-3337-444a-8218-ef08b89a6ff0", 00:16:47.536 "is_configured": true, 00:16:47.536 "data_offset": 0, 00:16:47.536 "data_size": 65536 00:16:47.536 }, 00:16:47.536 { 00:16:47.536 "name": "BaseBdev2", 00:16:47.536 "uuid": "5c8becbf-8708-456b-8a0c-04696c47a696", 00:16:47.536 "is_configured": true, 00:16:47.536 "data_offset": 0, 00:16:47.536 "data_size": 65536 00:16:47.536 }, 00:16:47.536 { 00:16:47.536 "name": "BaseBdev3", 00:16:47.536 "uuid": "401cb329-47bc-45d9-81fa-5b7648ad456c", 00:16:47.536 "is_configured": true, 00:16:47.536 "data_offset": 0, 00:16:47.536 "data_size": 65536 00:16:47.536 } 00:16:47.536 ] 00:16:47.536 } 00:16:47.536 } 00:16:47.536 }' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:47.536 BaseBdev2 00:16:47.536 BaseBdev3' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.536 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.536 [2024-11-05 16:31:00.604745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:47.796 
16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.796 "name": "Existed_Raid", 00:16:47.796 "uuid": "5a77cd44-1d37-4438-a6be-d70ee54ac836", 00:16:47.796 "strip_size_kb": 64, 00:16:47.796 "state": 
"online", 00:16:47.796 "raid_level": "raid5f", 00:16:47.796 "superblock": false, 00:16:47.796 "num_base_bdevs": 3, 00:16:47.796 "num_base_bdevs_discovered": 2, 00:16:47.796 "num_base_bdevs_operational": 2, 00:16:47.796 "base_bdevs_list": [ 00:16:47.796 { 00:16:47.796 "name": null, 00:16:47.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.796 "is_configured": false, 00:16:47.796 "data_offset": 0, 00:16:47.796 "data_size": 65536 00:16:47.796 }, 00:16:47.796 { 00:16:47.796 "name": "BaseBdev2", 00:16:47.796 "uuid": "5c8becbf-8708-456b-8a0c-04696c47a696", 00:16:47.796 "is_configured": true, 00:16:47.796 "data_offset": 0, 00:16:47.796 "data_size": 65536 00:16:47.796 }, 00:16:47.796 { 00:16:47.796 "name": "BaseBdev3", 00:16:47.796 "uuid": "401cb329-47bc-45d9-81fa-5b7648ad456c", 00:16:47.796 "is_configured": true, 00:16:47.796 "data_offset": 0, 00:16:47.796 "data_size": 65536 00:16:47.796 } 00:16:47.796 ] 00:16:47.796 }' 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.796 16:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.364 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.365 [2024-11-05 16:31:01.236521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:48.365 [2024-11-05 16:31:01.236716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.365 [2024-11-05 16:31:01.353038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.365 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.365 [2024-11-05 16:31:01.413036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.365 [2024-11-05 16:31:01.413097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.625 BaseBdev2 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:48.625 [ 00:16:48.625 { 00:16:48.625 "name": "BaseBdev2", 00:16:48.625 "aliases": [ 00:16:48.625 "a182d088-2580-4c87-9617-97df07fdd1b2" 00:16:48.625 ], 00:16:48.625 "product_name": "Malloc disk", 00:16:48.625 "block_size": 512, 00:16:48.625 "num_blocks": 65536, 00:16:48.625 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:48.625 "assigned_rate_limits": { 00:16:48.625 "rw_ios_per_sec": 0, 00:16:48.625 "rw_mbytes_per_sec": 0, 00:16:48.625 "r_mbytes_per_sec": 0, 00:16:48.625 "w_mbytes_per_sec": 0 00:16:48.625 }, 00:16:48.625 "claimed": false, 00:16:48.625 "zoned": false, 00:16:48.625 "supported_io_types": { 00:16:48.625 "read": true, 00:16:48.625 "write": true, 00:16:48.625 "unmap": true, 00:16:48.625 "flush": true, 00:16:48.625 "reset": true, 00:16:48.625 "nvme_admin": false, 00:16:48.625 "nvme_io": false, 00:16:48.625 "nvme_io_md": false, 00:16:48.625 "write_zeroes": true, 00:16:48.625 "zcopy": true, 00:16:48.625 "get_zone_info": false, 00:16:48.625 "zone_management": false, 00:16:48.625 "zone_append": false, 00:16:48.625 "compare": false, 00:16:48.625 "compare_and_write": false, 00:16:48.625 "abort": true, 00:16:48.625 "seek_hole": false, 00:16:48.625 "seek_data": false, 00:16:48.625 "copy": true, 00:16:48.625 "nvme_iov_md": false 00:16:48.625 }, 00:16:48.625 "memory_domains": [ 00:16:48.625 { 00:16:48.625 "dma_device_id": "system", 00:16:48.625 "dma_device_type": 1 00:16:48.625 }, 00:16:48.625 { 00:16:48.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.625 "dma_device_type": 2 00:16:48.625 } 00:16:48.625 ], 00:16:48.625 "driver_specific": {} 00:16:48.625 } 00:16:48.625 ] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.625 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.885 BaseBdev3 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.885 16:31:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.885 [ 00:16:48.885 { 00:16:48.885 "name": "BaseBdev3", 00:16:48.885 "aliases": [ 00:16:48.885 "2cd7ab67-96cb-4200-ad6d-eb89a69f268a" 00:16:48.885 ], 00:16:48.885 "product_name": "Malloc disk", 00:16:48.885 "block_size": 512, 00:16:48.885 "num_blocks": 65536, 00:16:48.885 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:48.885 "assigned_rate_limits": { 00:16:48.885 "rw_ios_per_sec": 0, 00:16:48.885 "rw_mbytes_per_sec": 0, 00:16:48.885 "r_mbytes_per_sec": 0, 00:16:48.885 "w_mbytes_per_sec": 0 00:16:48.885 }, 00:16:48.885 "claimed": false, 00:16:48.885 "zoned": false, 00:16:48.885 "supported_io_types": { 00:16:48.885 "read": true, 00:16:48.885 "write": true, 00:16:48.885 "unmap": true, 00:16:48.885 "flush": true, 00:16:48.885 "reset": true, 00:16:48.885 "nvme_admin": false, 00:16:48.885 "nvme_io": false, 00:16:48.885 "nvme_io_md": false, 00:16:48.885 "write_zeroes": true, 00:16:48.885 "zcopy": true, 00:16:48.885 "get_zone_info": false, 00:16:48.885 "zone_management": false, 00:16:48.885 "zone_append": false, 00:16:48.885 "compare": false, 00:16:48.885 "compare_and_write": false, 00:16:48.885 "abort": true, 00:16:48.885 "seek_hole": false, 00:16:48.885 "seek_data": false, 00:16:48.885 "copy": true, 00:16:48.885 "nvme_iov_md": false 00:16:48.885 }, 00:16:48.885 "memory_domains": [ 00:16:48.885 { 00:16:48.885 "dma_device_id": "system", 00:16:48.886 "dma_device_type": 1 00:16:48.886 }, 00:16:48.886 { 00:16:48.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.886 "dma_device_type": 2 00:16:48.886 } 00:16:48.886 ], 00:16:48.886 "driver_specific": {} 00:16:48.886 } 00:16:48.886 ] 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:48.886 16:31:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.886 [2024-11-05 16:31:01.768432] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.886 [2024-11-05 16:31:01.768546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.886 [2024-11-05 16:31:01.768606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.886 [2024-11-05 16:31:01.770780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.886 16:31:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.886 "name": "Existed_Raid", 00:16:48.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.886 "strip_size_kb": 64, 00:16:48.886 "state": "configuring", 00:16:48.886 "raid_level": "raid5f", 00:16:48.886 "superblock": false, 00:16:48.886 "num_base_bdevs": 3, 00:16:48.886 "num_base_bdevs_discovered": 2, 00:16:48.886 "num_base_bdevs_operational": 3, 00:16:48.886 "base_bdevs_list": [ 00:16:48.886 { 00:16:48.886 "name": "BaseBdev1", 00:16:48.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.886 "is_configured": false, 00:16:48.886 "data_offset": 0, 00:16:48.886 "data_size": 0 00:16:48.886 }, 00:16:48.886 { 00:16:48.886 "name": "BaseBdev2", 00:16:48.886 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:48.886 "is_configured": true, 00:16:48.886 "data_offset": 0, 00:16:48.886 "data_size": 65536 00:16:48.886 }, 00:16:48.886 { 00:16:48.886 "name": "BaseBdev3", 00:16:48.886 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:48.886 "is_configured": true, 
00:16:48.886 "data_offset": 0, 00:16:48.886 "data_size": 65536 00:16:48.886 } 00:16:48.886 ] 00:16:48.886 }' 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.886 16:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.453 [2024-11-05 16:31:02.267590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.453 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.454 16:31:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.454 "name": "Existed_Raid", 00:16:49.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.454 "strip_size_kb": 64, 00:16:49.454 "state": "configuring", 00:16:49.454 "raid_level": "raid5f", 00:16:49.454 "superblock": false, 00:16:49.454 "num_base_bdevs": 3, 00:16:49.454 "num_base_bdevs_discovered": 1, 00:16:49.454 "num_base_bdevs_operational": 3, 00:16:49.454 "base_bdevs_list": [ 00:16:49.454 { 00:16:49.454 "name": "BaseBdev1", 00:16:49.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.454 "is_configured": false, 00:16:49.454 "data_offset": 0, 00:16:49.454 "data_size": 0 00:16:49.454 }, 00:16:49.454 { 00:16:49.454 "name": null, 00:16:49.454 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:49.454 "is_configured": false, 00:16:49.454 "data_offset": 0, 00:16:49.454 "data_size": 65536 00:16:49.454 }, 00:16:49.454 { 00:16:49.454 "name": "BaseBdev3", 00:16:49.454 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:49.454 "is_configured": true, 00:16:49.454 "data_offset": 0, 00:16:49.454 "data_size": 65536 00:16:49.454 } 00:16:49.454 ] 00:16:49.454 }' 00:16:49.454 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.454 16:31:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.713 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.973 [2024-11-05 16:31:02.831194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.973 BaseBdev1 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:49.973 16:31:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.973 [ 00:16:49.973 { 00:16:49.973 "name": "BaseBdev1", 00:16:49.973 "aliases": [ 00:16:49.973 "8632acee-a154-4770-beb0-58bf27f9c228" 00:16:49.973 ], 00:16:49.973 "product_name": "Malloc disk", 00:16:49.973 "block_size": 512, 00:16:49.973 "num_blocks": 65536, 00:16:49.973 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:49.973 "assigned_rate_limits": { 00:16:49.973 "rw_ios_per_sec": 0, 00:16:49.973 "rw_mbytes_per_sec": 0, 00:16:49.973 "r_mbytes_per_sec": 0, 00:16:49.973 "w_mbytes_per_sec": 0 00:16:49.973 }, 00:16:49.973 "claimed": true, 00:16:49.973 "claim_type": "exclusive_write", 00:16:49.973 "zoned": false, 00:16:49.973 "supported_io_types": { 00:16:49.973 "read": true, 00:16:49.973 "write": true, 00:16:49.973 "unmap": true, 00:16:49.973 "flush": true, 00:16:49.973 "reset": true, 00:16:49.973 "nvme_admin": false, 00:16:49.973 "nvme_io": false, 00:16:49.973 "nvme_io_md": false, 00:16:49.973 "write_zeroes": true, 00:16:49.973 "zcopy": true, 00:16:49.973 "get_zone_info": false, 00:16:49.973 "zone_management": false, 00:16:49.973 "zone_append": false, 00:16:49.973 
"compare": false, 00:16:49.973 "compare_and_write": false, 00:16:49.973 "abort": true, 00:16:49.973 "seek_hole": false, 00:16:49.973 "seek_data": false, 00:16:49.973 "copy": true, 00:16:49.973 "nvme_iov_md": false 00:16:49.973 }, 00:16:49.973 "memory_domains": [ 00:16:49.973 { 00:16:49.973 "dma_device_id": "system", 00:16:49.973 "dma_device_type": 1 00:16:49.973 }, 00:16:49.973 { 00:16:49.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.973 "dma_device_type": 2 00:16:49.973 } 00:16:49.973 ], 00:16:49.973 "driver_specific": {} 00:16:49.973 } 00:16:49.973 ] 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.973 16:31:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.973 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.973 "name": "Existed_Raid", 00:16:49.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.973 "strip_size_kb": 64, 00:16:49.973 "state": "configuring", 00:16:49.973 "raid_level": "raid5f", 00:16:49.973 "superblock": false, 00:16:49.973 "num_base_bdevs": 3, 00:16:49.973 "num_base_bdevs_discovered": 2, 00:16:49.973 "num_base_bdevs_operational": 3, 00:16:49.973 "base_bdevs_list": [ 00:16:49.973 { 00:16:49.973 "name": "BaseBdev1", 00:16:49.974 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:49.974 "is_configured": true, 00:16:49.974 "data_offset": 0, 00:16:49.974 "data_size": 65536 00:16:49.974 }, 00:16:49.974 { 00:16:49.974 "name": null, 00:16:49.974 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:49.974 "is_configured": false, 00:16:49.974 "data_offset": 0, 00:16:49.974 "data_size": 65536 00:16:49.974 }, 00:16:49.974 { 00:16:49.974 "name": "BaseBdev3", 00:16:49.974 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:49.974 "is_configured": true, 00:16:49.974 "data_offset": 0, 00:16:49.974 "data_size": 65536 00:16:49.974 } 00:16:49.974 ] 00:16:49.974 }' 00:16:49.974 16:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.974 16:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 16:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 [2024-11-05 16:31:03.414287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.544 16:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.544 "name": "Existed_Raid", 00:16:50.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.544 "strip_size_kb": 64, 00:16:50.544 "state": "configuring", 00:16:50.544 "raid_level": "raid5f", 00:16:50.544 "superblock": false, 00:16:50.544 "num_base_bdevs": 3, 00:16:50.544 "num_base_bdevs_discovered": 1, 00:16:50.544 "num_base_bdevs_operational": 3, 00:16:50.544 "base_bdevs_list": [ 00:16:50.544 { 00:16:50.544 "name": "BaseBdev1", 00:16:50.544 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:50.544 "is_configured": true, 00:16:50.544 "data_offset": 0, 00:16:50.544 "data_size": 65536 00:16:50.544 }, 00:16:50.544 { 00:16:50.544 "name": null, 00:16:50.544 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:50.544 "is_configured": false, 00:16:50.544 "data_offset": 0, 00:16:50.544 "data_size": 65536 00:16:50.544 }, 00:16:50.544 { 00:16:50.544 "name": null, 
00:16:50.544 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:50.544 "is_configured": false, 00:16:50.544 "data_offset": 0, 00:16:50.544 "data_size": 65536 00:16:50.544 } 00:16:50.544 ] 00:16:50.544 }' 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.544 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.804 [2024-11-05 16:31:03.881571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.804 16:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.804 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.805 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.805 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.805 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.805 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.805 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.065 "name": "Existed_Raid", 00:16:51.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.065 "strip_size_kb": 64, 00:16:51.065 "state": "configuring", 00:16:51.065 "raid_level": "raid5f", 00:16:51.065 "superblock": false, 00:16:51.065 "num_base_bdevs": 3, 00:16:51.065 "num_base_bdevs_discovered": 2, 00:16:51.065 "num_base_bdevs_operational": 3, 00:16:51.065 "base_bdevs_list": [ 00:16:51.065 { 
00:16:51.065 "name": "BaseBdev1", 00:16:51.065 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:51.065 "is_configured": true, 00:16:51.065 "data_offset": 0, 00:16:51.065 "data_size": 65536 00:16:51.065 }, 00:16:51.065 { 00:16:51.065 "name": null, 00:16:51.065 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:51.065 "is_configured": false, 00:16:51.065 "data_offset": 0, 00:16:51.065 "data_size": 65536 00:16:51.065 }, 00:16:51.065 { 00:16:51.065 "name": "BaseBdev3", 00:16:51.065 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:51.065 "is_configured": true, 00:16:51.065 "data_offset": 0, 00:16:51.065 "data_size": 65536 00:16:51.065 } 00:16:51.065 ] 00:16:51.065 }' 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.065 16:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.325 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.325 [2024-11-05 16:31:04.364760] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.584 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.584 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.585 "name": "Existed_Raid", 00:16:51.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.585 "strip_size_kb": 64, 00:16:51.585 "state": "configuring", 00:16:51.585 "raid_level": "raid5f", 00:16:51.585 "superblock": false, 00:16:51.585 "num_base_bdevs": 3, 00:16:51.585 "num_base_bdevs_discovered": 1, 00:16:51.585 "num_base_bdevs_operational": 3, 00:16:51.585 "base_bdevs_list": [ 00:16:51.585 { 00:16:51.585 "name": null, 00:16:51.585 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:51.585 "is_configured": false, 00:16:51.585 "data_offset": 0, 00:16:51.585 "data_size": 65536 00:16:51.585 }, 00:16:51.585 { 00:16:51.585 "name": null, 00:16:51.585 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:51.585 "is_configured": false, 00:16:51.585 "data_offset": 0, 00:16:51.585 "data_size": 65536 00:16:51.585 }, 00:16:51.585 { 00:16:51.585 "name": "BaseBdev3", 00:16:51.585 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:51.585 "is_configured": true, 00:16:51.585 "data_offset": 0, 00:16:51.585 "data_size": 65536 00:16:51.585 } 00:16:51.585 ] 00:16:51.585 }' 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.585 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.154 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.154 16:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:52.154 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 16:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.154 [2024-11-05 16:31:05.034297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.154 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.154 16:31:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.155 "name": "Existed_Raid", 00:16:52.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.155 "strip_size_kb": 64, 00:16:52.155 "state": "configuring", 00:16:52.155 "raid_level": "raid5f", 00:16:52.155 "superblock": false, 00:16:52.155 "num_base_bdevs": 3, 00:16:52.155 "num_base_bdevs_discovered": 2, 00:16:52.155 "num_base_bdevs_operational": 3, 00:16:52.155 "base_bdevs_list": [ 00:16:52.155 { 00:16:52.155 "name": null, 00:16:52.155 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:52.155 "is_configured": false, 00:16:52.155 "data_offset": 0, 00:16:52.155 "data_size": 65536 00:16:52.155 }, 00:16:52.155 { 00:16:52.155 "name": "BaseBdev2", 00:16:52.155 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:52.155 "is_configured": true, 00:16:52.155 "data_offset": 0, 00:16:52.155 "data_size": 65536 00:16:52.155 }, 00:16:52.155 { 00:16:52.155 "name": "BaseBdev3", 00:16:52.155 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:52.155 "is_configured": true, 00:16:52.155 "data_offset": 0, 00:16:52.155 "data_size": 65536 00:16:52.155 } 00:16:52.155 ] 00:16:52.155 }' 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.155 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.414 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.414 16:31:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:52.414 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.414 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8632acee-a154-4770-beb0-58bf27f9c228 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 [2024-11-05 16:31:05.637724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:52.673 [2024-11-05 16:31:05.637785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:52.673 [2024-11-05 16:31:05.637800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:52.673 [2024-11-05 16:31:05.638082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:52.673 [2024-11-05 16:31:05.644248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:52.673 [2024-11-05 16:31:05.644274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:52.673 [2024-11-05 16:31:05.644615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.673 NewBaseBdev 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:52.673 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.673 16:31:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 [ 00:16:52.673 { 00:16:52.673 "name": "NewBaseBdev", 00:16:52.673 "aliases": [ 00:16:52.673 "8632acee-a154-4770-beb0-58bf27f9c228" 00:16:52.673 ], 00:16:52.673 "product_name": "Malloc disk", 00:16:52.673 "block_size": 512, 00:16:52.673 "num_blocks": 65536, 00:16:52.673 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:52.673 "assigned_rate_limits": { 00:16:52.673 "rw_ios_per_sec": 0, 00:16:52.673 "rw_mbytes_per_sec": 0, 00:16:52.673 "r_mbytes_per_sec": 0, 00:16:52.673 "w_mbytes_per_sec": 0 00:16:52.673 }, 00:16:52.673 "claimed": true, 00:16:52.673 "claim_type": "exclusive_write", 00:16:52.673 "zoned": false, 00:16:52.673 "supported_io_types": { 00:16:52.673 "read": true, 00:16:52.673 "write": true, 00:16:52.673 "unmap": true, 00:16:52.673 "flush": true, 00:16:52.673 "reset": true, 00:16:52.673 "nvme_admin": false, 00:16:52.673 "nvme_io": false, 00:16:52.673 "nvme_io_md": false, 00:16:52.673 "write_zeroes": true, 00:16:52.673 "zcopy": true, 00:16:52.673 "get_zone_info": false, 00:16:52.673 "zone_management": false, 00:16:52.674 "zone_append": false, 00:16:52.674 "compare": false, 00:16:52.674 "compare_and_write": false, 00:16:52.674 "abort": true, 00:16:52.674 "seek_hole": false, 00:16:52.674 "seek_data": false, 00:16:52.674 "copy": true, 00:16:52.674 "nvme_iov_md": false 00:16:52.674 }, 00:16:52.674 "memory_domains": [ 00:16:52.674 { 00:16:52.674 "dma_device_id": "system", 00:16:52.674 "dma_device_type": 1 00:16:52.674 }, 00:16:52.674 { 00:16:52.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.674 "dma_device_type": 2 00:16:52.674 } 00:16:52.674 ], 00:16:52.674 "driver_specific": {} 00:16:52.674 } 00:16:52.674 ] 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:52.674 16:31:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.674 "name": "Existed_Raid", 00:16:52.674 "uuid": "33f2ec27-875f-46b8-8c7d-6758b79ec330", 00:16:52.674 "strip_size_kb": 64, 00:16:52.674 "state": "online", 
00:16:52.674 "raid_level": "raid5f", 00:16:52.674 "superblock": false, 00:16:52.674 "num_base_bdevs": 3, 00:16:52.674 "num_base_bdevs_discovered": 3, 00:16:52.674 "num_base_bdevs_operational": 3, 00:16:52.674 "base_bdevs_list": [ 00:16:52.674 { 00:16:52.674 "name": "NewBaseBdev", 00:16:52.674 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:52.674 "is_configured": true, 00:16:52.674 "data_offset": 0, 00:16:52.674 "data_size": 65536 00:16:52.674 }, 00:16:52.674 { 00:16:52.674 "name": "BaseBdev2", 00:16:52.674 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:52.674 "is_configured": true, 00:16:52.674 "data_offset": 0, 00:16:52.674 "data_size": 65536 00:16:52.674 }, 00:16:52.674 { 00:16:52.674 "name": "BaseBdev3", 00:16:52.674 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:52.674 "is_configured": true, 00:16:52.674 "data_offset": 0, 00:16:52.674 "data_size": 65536 00:16:52.674 } 00:16:52.674 ] 00:16:52.674 }' 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.674 16:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:53.243 16:31:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.243 [2024-11-05 16:31:06.127247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.243 "name": "Existed_Raid", 00:16:53.243 "aliases": [ 00:16:53.243 "33f2ec27-875f-46b8-8c7d-6758b79ec330" 00:16:53.243 ], 00:16:53.243 "product_name": "Raid Volume", 00:16:53.243 "block_size": 512, 00:16:53.243 "num_blocks": 131072, 00:16:53.243 "uuid": "33f2ec27-875f-46b8-8c7d-6758b79ec330", 00:16:53.243 "assigned_rate_limits": { 00:16:53.243 "rw_ios_per_sec": 0, 00:16:53.243 "rw_mbytes_per_sec": 0, 00:16:53.243 "r_mbytes_per_sec": 0, 00:16:53.243 "w_mbytes_per_sec": 0 00:16:53.243 }, 00:16:53.243 "claimed": false, 00:16:53.243 "zoned": false, 00:16:53.243 "supported_io_types": { 00:16:53.243 "read": true, 00:16:53.243 "write": true, 00:16:53.243 "unmap": false, 00:16:53.243 "flush": false, 00:16:53.243 "reset": true, 00:16:53.243 "nvme_admin": false, 00:16:53.243 "nvme_io": false, 00:16:53.243 "nvme_io_md": false, 00:16:53.243 "write_zeroes": true, 00:16:53.243 "zcopy": false, 00:16:53.243 "get_zone_info": false, 00:16:53.243 "zone_management": false, 00:16:53.243 "zone_append": false, 00:16:53.243 "compare": false, 00:16:53.243 "compare_and_write": false, 00:16:53.243 "abort": false, 00:16:53.243 "seek_hole": false, 00:16:53.243 "seek_data": false, 00:16:53.243 "copy": false, 00:16:53.243 "nvme_iov_md": false 00:16:53.243 }, 00:16:53.243 "driver_specific": { 00:16:53.243 "raid": { 00:16:53.243 "uuid": 
"33f2ec27-875f-46b8-8c7d-6758b79ec330", 00:16:53.243 "strip_size_kb": 64, 00:16:53.243 "state": "online", 00:16:53.243 "raid_level": "raid5f", 00:16:53.243 "superblock": false, 00:16:53.243 "num_base_bdevs": 3, 00:16:53.243 "num_base_bdevs_discovered": 3, 00:16:53.243 "num_base_bdevs_operational": 3, 00:16:53.243 "base_bdevs_list": [ 00:16:53.243 { 00:16:53.243 "name": "NewBaseBdev", 00:16:53.243 "uuid": "8632acee-a154-4770-beb0-58bf27f9c228", 00:16:53.243 "is_configured": true, 00:16:53.243 "data_offset": 0, 00:16:53.243 "data_size": 65536 00:16:53.243 }, 00:16:53.243 { 00:16:53.243 "name": "BaseBdev2", 00:16:53.243 "uuid": "a182d088-2580-4c87-9617-97df07fdd1b2", 00:16:53.243 "is_configured": true, 00:16:53.243 "data_offset": 0, 00:16:53.243 "data_size": 65536 00:16:53.243 }, 00:16:53.243 { 00:16:53.243 "name": "BaseBdev3", 00:16:53.243 "uuid": "2cd7ab67-96cb-4200-ad6d-eb89a69f268a", 00:16:53.243 "is_configured": true, 00:16:53.243 "data_offset": 0, 00:16:53.243 "data_size": 65536 00:16:53.243 } 00:16:53.243 ] 00:16:53.243 } 00:16:53.243 } 00:16:53.243 }' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:53.243 BaseBdev2 00:16:53.243 BaseBdev3' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.243 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.501 [2024-11-05 16:31:06.422616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.501 [2024-11-05 16:31:06.422651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.501 [2024-11-05 16:31:06.422758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.501 [2024-11-05 16:31:06.423097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.501 [2024-11-05 16:31:06.423123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80231 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80231 ']' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80231 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80231 00:16:53.501 killing process with pid 80231 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80231' 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80231 00:16:53.501 16:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80231 00:16:53.501 [2024-11-05 16:31:06.454697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.759 [2024-11-05 16:31:06.757595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.138 16:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:55.138 00:16:55.138 real 0m11.241s 00:16:55.138 user 0m17.840s 00:16:55.138 sys 0m1.917s 00:16:55.138 16:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.138 16:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.138 ************************************ 00:16:55.138 END TEST raid5f_state_function_test 00:16:55.138 ************************************ 00:16:55.138 16:31:08 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:55.138 16:31:08 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:55.138 16:31:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:55.138 16:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.138 ************************************ 00:16:55.138 START TEST raid5f_state_function_test_sb 00:16:55.138 ************************************ 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:55.138 16:31:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80854 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:55.138 16:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80854' 00:16:55.138 Process raid pid: 80854 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80854 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80854 ']' 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:55.139 16:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.139 [2024-11-05 16:31:08.158010] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:16:55.139 [2024-11-05 16:31:08.158204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.399 [2024-11-05 16:31:08.337272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.399 [2024-11-05 16:31:08.459053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.658 [2024-11-05 16:31:08.665412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.658 [2024-11-05 16:31:08.665560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.226 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.227 [2024-11-05 16:31:09.015923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.227 [2024-11-05 16:31:09.016036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.227 [2024-11-05 16:31:09.016053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.227 [2024-11-05 16:31:09.016065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.227 [2024-11-05 16:31:09.016072] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:56.227 [2024-11-05 16:31:09.016082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.227 16:31:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.227 "name": "Existed_Raid", 00:16:56.227 "uuid": "a8a0055c-bba2-40e8-8fa5-4ea979f75dec", 00:16:56.227 "strip_size_kb": 64, 00:16:56.227 "state": "configuring", 00:16:56.227 "raid_level": "raid5f", 00:16:56.227 "superblock": true, 00:16:56.227 "num_base_bdevs": 3, 00:16:56.227 "num_base_bdevs_discovered": 0, 00:16:56.227 "num_base_bdevs_operational": 3, 00:16:56.227 "base_bdevs_list": [ 00:16:56.227 { 00:16:56.227 "name": "BaseBdev1", 00:16:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.227 "is_configured": false, 00:16:56.227 "data_offset": 0, 00:16:56.227 "data_size": 0 00:16:56.227 }, 00:16:56.227 { 00:16:56.227 "name": "BaseBdev2", 00:16:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.227 "is_configured": false, 00:16:56.227 "data_offset": 0, 00:16:56.227 "data_size": 0 00:16:56.227 }, 00:16:56.227 { 00:16:56.227 "name": "BaseBdev3", 00:16:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.227 "is_configured": false, 00:16:56.227 "data_offset": 0, 00:16:56.227 "data_size": 0 00:16:56.227 } 00:16:56.227 ] 00:16:56.227 }' 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.227 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 [2024-11-05 16:31:09.471061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.487 
[2024-11-05 16:31:09.471143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 [2024-11-05 16:31:09.483050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.487 [2024-11-05 16:31:09.483161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.487 [2024-11-05 16:31:09.483193] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.487 [2024-11-05 16:31:09.483220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.487 [2024-11-05 16:31:09.483241] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.487 [2024-11-05 16:31:09.483265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 [2024-11-05 16:31:09.527650] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.487 BaseBdev1 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.487 [ 00:16:56.487 { 00:16:56.487 "name": "BaseBdev1", 00:16:56.487 "aliases": [ 00:16:56.487 "8f53deb4-d1d1-482d-a931-05238822a827" 00:16:56.487 ], 00:16:56.487 "product_name": "Malloc disk", 00:16:56.487 "block_size": 512, 00:16:56.487 
"num_blocks": 65536, 00:16:56.487 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:56.487 "assigned_rate_limits": { 00:16:56.487 "rw_ios_per_sec": 0, 00:16:56.487 "rw_mbytes_per_sec": 0, 00:16:56.487 "r_mbytes_per_sec": 0, 00:16:56.487 "w_mbytes_per_sec": 0 00:16:56.487 }, 00:16:56.487 "claimed": true, 00:16:56.487 "claim_type": "exclusive_write", 00:16:56.487 "zoned": false, 00:16:56.487 "supported_io_types": { 00:16:56.487 "read": true, 00:16:56.487 "write": true, 00:16:56.487 "unmap": true, 00:16:56.487 "flush": true, 00:16:56.487 "reset": true, 00:16:56.487 "nvme_admin": false, 00:16:56.487 "nvme_io": false, 00:16:56.487 "nvme_io_md": false, 00:16:56.487 "write_zeroes": true, 00:16:56.487 "zcopy": true, 00:16:56.487 "get_zone_info": false, 00:16:56.487 "zone_management": false, 00:16:56.487 "zone_append": false, 00:16:56.487 "compare": false, 00:16:56.487 "compare_and_write": false, 00:16:56.487 "abort": true, 00:16:56.487 "seek_hole": false, 00:16:56.487 "seek_data": false, 00:16:56.487 "copy": true, 00:16:56.487 "nvme_iov_md": false 00:16:56.487 }, 00:16:56.487 "memory_domains": [ 00:16:56.487 { 00:16:56.487 "dma_device_id": "system", 00:16:56.487 "dma_device_type": 1 00:16:56.487 }, 00:16:56.487 { 00:16:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.487 "dma_device_type": 2 00:16:56.487 } 00:16:56.487 ], 00:16:56.487 "driver_specific": {} 00:16:56.487 } 00:16:56.487 ] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.487 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.746 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.746 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.746 "name": "Existed_Raid", 00:16:56.746 "uuid": "64a20ff0-8159-4332-8d32-1b28030a1145", 00:16:56.746 "strip_size_kb": 64, 00:16:56.746 "state": "configuring", 00:16:56.746 "raid_level": "raid5f", 00:16:56.746 "superblock": true, 00:16:56.746 "num_base_bdevs": 3, 00:16:56.746 "num_base_bdevs_discovered": 1, 00:16:56.746 "num_base_bdevs_operational": 3, 00:16:56.746 "base_bdevs_list": [ 00:16:56.746 { 00:16:56.746 
"name": "BaseBdev1", 00:16:56.746 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:56.746 "is_configured": true, 00:16:56.746 "data_offset": 2048, 00:16:56.746 "data_size": 63488 00:16:56.746 }, 00:16:56.746 { 00:16:56.746 "name": "BaseBdev2", 00:16:56.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.746 "is_configured": false, 00:16:56.746 "data_offset": 0, 00:16:56.746 "data_size": 0 00:16:56.746 }, 00:16:56.746 { 00:16:56.746 "name": "BaseBdev3", 00:16:56.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.746 "is_configured": false, 00:16:56.746 "data_offset": 0, 00:16:56.746 "data_size": 0 00:16:56.746 } 00:16:56.746 ] 00:16:56.746 }' 00:16:56.746 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.746 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.009 [2024-11-05 16:31:09.954996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.009 [2024-11-05 16:31:09.955101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:57.009 [2024-11-05 16:31:09.963054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.009 [2024-11-05 16:31:09.965126] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.009 [2024-11-05 16:31:09.965222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.009 [2024-11-05 16:31:09.965288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.009 [2024-11-05 16:31:09.965341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.009 16:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.009 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.009 "name": "Existed_Raid", 00:16:57.009 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:57.009 "strip_size_kb": 64, 00:16:57.009 "state": "configuring", 00:16:57.009 "raid_level": "raid5f", 00:16:57.009 "superblock": true, 00:16:57.009 "num_base_bdevs": 3, 00:16:57.009 "num_base_bdevs_discovered": 1, 00:16:57.009 "num_base_bdevs_operational": 3, 00:16:57.009 "base_bdevs_list": [ 00:16:57.009 { 00:16:57.009 "name": "BaseBdev1", 00:16:57.009 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:57.009 "is_configured": true, 00:16:57.009 "data_offset": 2048, 00:16:57.009 "data_size": 63488 00:16:57.009 }, 00:16:57.009 { 00:16:57.009 "name": "BaseBdev2", 00:16:57.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.009 "is_configured": false, 00:16:57.009 "data_offset": 0, 00:16:57.009 "data_size": 0 00:16:57.009 }, 00:16:57.009 { 00:16:57.009 "name": "BaseBdev3", 00:16:57.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.009 "is_configured": false, 00:16:57.009 "data_offset": 0, 00:16:57.009 "data_size": 
0 00:16:57.009 } 00:16:57.009 ] 00:16:57.009 }' 00:16:57.009 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.009 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.584 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.584 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.584 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 [2024-11-05 16:31:10.426674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.585 BaseBdev2 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 [ 00:16:57.585 { 00:16:57.585 "name": "BaseBdev2", 00:16:57.585 "aliases": [ 00:16:57.585 "4be68f5b-effc-49ac-90a9-770effd72daf" 00:16:57.585 ], 00:16:57.585 "product_name": "Malloc disk", 00:16:57.585 "block_size": 512, 00:16:57.585 "num_blocks": 65536, 00:16:57.585 "uuid": "4be68f5b-effc-49ac-90a9-770effd72daf", 00:16:57.585 "assigned_rate_limits": { 00:16:57.585 "rw_ios_per_sec": 0, 00:16:57.585 "rw_mbytes_per_sec": 0, 00:16:57.585 "r_mbytes_per_sec": 0, 00:16:57.585 "w_mbytes_per_sec": 0 00:16:57.585 }, 00:16:57.585 "claimed": true, 00:16:57.585 "claim_type": "exclusive_write", 00:16:57.585 "zoned": false, 00:16:57.585 "supported_io_types": { 00:16:57.585 "read": true, 00:16:57.585 "write": true, 00:16:57.585 "unmap": true, 00:16:57.585 "flush": true, 00:16:57.585 "reset": true, 00:16:57.585 "nvme_admin": false, 00:16:57.585 "nvme_io": false, 00:16:57.585 "nvme_io_md": false, 00:16:57.585 "write_zeroes": true, 00:16:57.585 "zcopy": true, 00:16:57.585 "get_zone_info": false, 00:16:57.585 "zone_management": false, 00:16:57.585 "zone_append": false, 00:16:57.585 "compare": false, 00:16:57.585 "compare_and_write": false, 00:16:57.585 "abort": true, 00:16:57.585 "seek_hole": false, 00:16:57.585 "seek_data": false, 00:16:57.585 "copy": true, 00:16:57.585 "nvme_iov_md": false 00:16:57.585 }, 00:16:57.585 "memory_domains": [ 00:16:57.585 { 00:16:57.585 "dma_device_id": "system", 00:16:57.585 "dma_device_type": 1 00:16:57.585 }, 00:16:57.585 { 00:16:57.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.585 "dma_device_type": 2 00:16:57.585 } 
00:16:57.585 ], 00:16:57.585 "driver_specific": {} 00:16:57.585 } 00:16:57.585 ] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.585 16:31:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.585 "name": "Existed_Raid", 00:16:57.585 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:57.585 "strip_size_kb": 64, 00:16:57.585 "state": "configuring", 00:16:57.585 "raid_level": "raid5f", 00:16:57.585 "superblock": true, 00:16:57.585 "num_base_bdevs": 3, 00:16:57.585 "num_base_bdevs_discovered": 2, 00:16:57.585 "num_base_bdevs_operational": 3, 00:16:57.585 "base_bdevs_list": [ 00:16:57.585 { 00:16:57.585 "name": "BaseBdev1", 00:16:57.585 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:57.585 "is_configured": true, 00:16:57.585 "data_offset": 2048, 00:16:57.585 "data_size": 63488 00:16:57.585 }, 00:16:57.585 { 00:16:57.585 "name": "BaseBdev2", 00:16:57.585 "uuid": "4be68f5b-effc-49ac-90a9-770effd72daf", 00:16:57.585 "is_configured": true, 00:16:57.585 "data_offset": 2048, 00:16:57.585 "data_size": 63488 00:16:57.585 }, 00:16:57.585 { 00:16:57.585 "name": "BaseBdev3", 00:16:57.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.585 "is_configured": false, 00:16:57.585 "data_offset": 0, 00:16:57.585 "data_size": 0 00:16:57.585 } 00:16:57.585 ] 00:16:57.585 }' 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.585 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.845 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.845 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:57.845 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.105 BaseBdev3 00:16:58.105 [2024-11-05 16:31:10.965846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.105 [2024-11-05 16:31:10.966153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:58.105 [2024-11-05 16:31:10.966180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:58.105 [2024-11-05 16:31:10.966643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.105 [2024-11-05 16:31:10.972870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.105 [2024-11-05 16:31:10.972958] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:58.105 [2024-11-05 16:31:10.973157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.105 16:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.105 [ 00:16:58.105 { 00:16:58.105 "name": "BaseBdev3", 00:16:58.105 "aliases": [ 00:16:58.105 "2b01e900-2b79-46c6-9e9c-3f06208e2cdb" 00:16:58.105 ], 00:16:58.105 "product_name": "Malloc disk", 00:16:58.105 "block_size": 512, 00:16:58.105 "num_blocks": 65536, 00:16:58.105 "uuid": "2b01e900-2b79-46c6-9e9c-3f06208e2cdb", 00:16:58.105 "assigned_rate_limits": { 00:16:58.105 "rw_ios_per_sec": 0, 00:16:58.105 "rw_mbytes_per_sec": 0, 00:16:58.105 "r_mbytes_per_sec": 0, 00:16:58.105 "w_mbytes_per_sec": 0 00:16:58.105 }, 00:16:58.105 "claimed": true, 00:16:58.105 "claim_type": "exclusive_write", 00:16:58.105 "zoned": false, 00:16:58.105 "supported_io_types": { 00:16:58.105 "read": true, 00:16:58.105 "write": true, 00:16:58.105 "unmap": true, 00:16:58.105 "flush": true, 00:16:58.105 "reset": true, 00:16:58.105 "nvme_admin": false, 00:16:58.105 "nvme_io": false, 00:16:58.105 "nvme_io_md": false, 00:16:58.105 "write_zeroes": true, 00:16:58.105 "zcopy": true, 00:16:58.105 "get_zone_info": false, 00:16:58.105 "zone_management": false, 00:16:58.105 "zone_append": false, 00:16:58.105 "compare": false, 00:16:58.105 "compare_and_write": false, 00:16:58.105 "abort": true, 00:16:58.105 "seek_hole": false, 00:16:58.105 "seek_data": false, 00:16:58.105 "copy": true, 00:16:58.105 
"nvme_iov_md": false 00:16:58.105 }, 00:16:58.105 "memory_domains": [ 00:16:58.105 { 00:16:58.105 "dma_device_id": "system", 00:16:58.105 "dma_device_type": 1 00:16:58.105 }, 00:16:58.105 { 00:16:58.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.105 "dma_device_type": 2 00:16:58.105 } 00:16:58.105 ], 00:16:58.105 "driver_specific": {} 00:16:58.105 } 00:16:58.105 ] 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.105 "name": "Existed_Raid", 00:16:58.105 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:58.105 "strip_size_kb": 64, 00:16:58.105 "state": "online", 00:16:58.105 "raid_level": "raid5f", 00:16:58.105 "superblock": true, 00:16:58.105 "num_base_bdevs": 3, 00:16:58.105 "num_base_bdevs_discovered": 3, 00:16:58.105 "num_base_bdevs_operational": 3, 00:16:58.105 "base_bdevs_list": [ 00:16:58.105 { 00:16:58.105 "name": "BaseBdev1", 00:16:58.105 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:58.105 "is_configured": true, 00:16:58.105 "data_offset": 2048, 00:16:58.105 "data_size": 63488 00:16:58.105 }, 00:16:58.105 { 00:16:58.105 "name": "BaseBdev2", 00:16:58.105 "uuid": "4be68f5b-effc-49ac-90a9-770effd72daf", 00:16:58.105 "is_configured": true, 00:16:58.105 "data_offset": 2048, 00:16:58.105 "data_size": 63488 00:16:58.105 }, 00:16:58.105 { 00:16:58.105 "name": "BaseBdev3", 00:16:58.105 "uuid": "2b01e900-2b79-46c6-9e9c-3f06208e2cdb", 00:16:58.105 "is_configured": true, 00:16:58.105 "data_offset": 2048, 00:16:58.105 "data_size": 63488 00:16:58.105 } 00:16:58.105 ] 00:16:58.105 }' 00:16:58.105 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.105 16:31:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.673 [2024-11-05 16:31:11.472094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.673 "name": "Existed_Raid", 00:16:58.673 "aliases": [ 00:16:58.673 "a3a0dfac-63dc-495c-be43-c74c4d89d1b2" 00:16:58.673 ], 00:16:58.673 "product_name": "Raid Volume", 00:16:58.673 "block_size": 512, 00:16:58.673 "num_blocks": 126976, 00:16:58.673 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:58.673 "assigned_rate_limits": { 00:16:58.673 "rw_ios_per_sec": 0, 00:16:58.673 
"rw_mbytes_per_sec": 0, 00:16:58.673 "r_mbytes_per_sec": 0, 00:16:58.673 "w_mbytes_per_sec": 0 00:16:58.673 }, 00:16:58.673 "claimed": false, 00:16:58.673 "zoned": false, 00:16:58.673 "supported_io_types": { 00:16:58.673 "read": true, 00:16:58.673 "write": true, 00:16:58.673 "unmap": false, 00:16:58.673 "flush": false, 00:16:58.673 "reset": true, 00:16:58.673 "nvme_admin": false, 00:16:58.673 "nvme_io": false, 00:16:58.673 "nvme_io_md": false, 00:16:58.673 "write_zeroes": true, 00:16:58.673 "zcopy": false, 00:16:58.673 "get_zone_info": false, 00:16:58.673 "zone_management": false, 00:16:58.673 "zone_append": false, 00:16:58.673 "compare": false, 00:16:58.673 "compare_and_write": false, 00:16:58.673 "abort": false, 00:16:58.673 "seek_hole": false, 00:16:58.673 "seek_data": false, 00:16:58.673 "copy": false, 00:16:58.673 "nvme_iov_md": false 00:16:58.673 }, 00:16:58.673 "driver_specific": { 00:16:58.673 "raid": { 00:16:58.673 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:58.673 "strip_size_kb": 64, 00:16:58.673 "state": "online", 00:16:58.673 "raid_level": "raid5f", 00:16:58.673 "superblock": true, 00:16:58.673 "num_base_bdevs": 3, 00:16:58.673 "num_base_bdevs_discovered": 3, 00:16:58.673 "num_base_bdevs_operational": 3, 00:16:58.673 "base_bdevs_list": [ 00:16:58.673 { 00:16:58.673 "name": "BaseBdev1", 00:16:58.673 "uuid": "8f53deb4-d1d1-482d-a931-05238822a827", 00:16:58.673 "is_configured": true, 00:16:58.673 "data_offset": 2048, 00:16:58.673 "data_size": 63488 00:16:58.673 }, 00:16:58.673 { 00:16:58.673 "name": "BaseBdev2", 00:16:58.673 "uuid": "4be68f5b-effc-49ac-90a9-770effd72daf", 00:16:58.673 "is_configured": true, 00:16:58.673 "data_offset": 2048, 00:16:58.673 "data_size": 63488 00:16:58.673 }, 00:16:58.673 { 00:16:58.673 "name": "BaseBdev3", 00:16:58.673 "uuid": "2b01e900-2b79-46c6-9e9c-3f06208e2cdb", 00:16:58.673 "is_configured": true, 00:16:58.673 "data_offset": 2048, 00:16:58.673 "data_size": 63488 00:16:58.673 } 00:16:58.673 ] 00:16:58.673 } 
00:16:58.673 } 00:16:58.673 }' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:58.673 BaseBdev2 00:16:58.673 BaseBdev3' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.673 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.673 [2024-11-05 
16:31:11.743417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.932 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.933 16:31:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.933 "name": "Existed_Raid", 00:16:58.933 "uuid": "a3a0dfac-63dc-495c-be43-c74c4d89d1b2", 00:16:58.933 "strip_size_kb": 64, 00:16:58.933 "state": "online", 00:16:58.933 "raid_level": "raid5f", 00:16:58.933 "superblock": true, 00:16:58.933 "num_base_bdevs": 3, 00:16:58.933 "num_base_bdevs_discovered": 2, 00:16:58.933 "num_base_bdevs_operational": 2, 00:16:58.933 "base_bdevs_list": [ 00:16:58.933 { 00:16:58.933 "name": null, 00:16:58.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.933 "is_configured": false, 00:16:58.933 "data_offset": 0, 00:16:58.933 "data_size": 63488 00:16:58.933 }, 00:16:58.933 { 00:16:58.933 "name": "BaseBdev2", 00:16:58.933 "uuid": "4be68f5b-effc-49ac-90a9-770effd72daf", 00:16:58.933 "is_configured": true, 00:16:58.933 "data_offset": 2048, 00:16:58.933 "data_size": 63488 00:16:58.933 }, 00:16:58.933 { 00:16:58.933 "name": "BaseBdev3", 00:16:58.933 "uuid": "2b01e900-2b79-46c6-9e9c-3f06208e2cdb", 00:16:58.933 "is_configured": true, 00:16:58.933 "data_offset": 2048, 00:16:58.933 "data_size": 63488 00:16:58.933 } 00:16:58.933 ] 00:16:58.933 }' 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.933 16:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.502 [2024-11-05 16:31:12.406955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.502 [2024-11-05 16:31:12.407116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.502 [2024-11-05 16:31:12.524080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.502 16:31:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.502 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.502 [2024-11-05 16:31:12.584022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.502 [2024-11-05 16:31:12.584143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.761 
16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.761 BaseBdev2 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:59.761 16:31:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.761 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.762 [ 00:16:59.762 { 00:16:59.762 "name": "BaseBdev2", 00:16:59.762 "aliases": [ 00:16:59.762 "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c" 00:16:59.762 ], 00:16:59.762 "product_name": "Malloc disk", 00:16:59.762 "block_size": 512, 00:16:59.762 "num_blocks": 65536, 00:16:59.762 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:16:59.762 "assigned_rate_limits": { 00:16:59.762 "rw_ios_per_sec": 0, 00:16:59.762 "rw_mbytes_per_sec": 0, 00:16:59.762 "r_mbytes_per_sec": 0, 00:16:59.762 "w_mbytes_per_sec": 0 00:16:59.762 }, 00:16:59.762 "claimed": false, 00:16:59.762 "zoned": false, 00:16:59.762 "supported_io_types": { 00:16:59.762 "read": true, 00:16:59.762 "write": true, 00:16:59.762 "unmap": true, 00:16:59.762 "flush": true, 00:16:59.762 "reset": true, 00:16:59.762 "nvme_admin": false, 00:16:59.762 "nvme_io": false, 00:16:59.762 "nvme_io_md": false, 00:16:59.762 "write_zeroes": true, 00:16:59.762 "zcopy": true, 00:16:59.762 "get_zone_info": false, 
00:16:59.762 "zone_management": false, 00:16:59.762 "zone_append": false, 00:16:59.762 "compare": false, 00:16:59.762 "compare_and_write": false, 00:16:59.762 "abort": true, 00:16:59.762 "seek_hole": false, 00:16:59.762 "seek_data": false, 00:16:59.762 "copy": true, 00:16:59.762 "nvme_iov_md": false 00:16:59.762 }, 00:16:59.762 "memory_domains": [ 00:16:59.762 { 00:16:59.762 "dma_device_id": "system", 00:16:59.762 "dma_device_type": 1 00:16:59.762 }, 00:16:59.762 { 00:16:59.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.762 "dma_device_type": 2 00:16:59.762 } 00:16:59.762 ], 00:16:59.762 "driver_specific": {} 00:16:59.762 } 00:16:59.762 ] 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.762 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 BaseBdev3 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:00.021 16:31:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 [ 00:17:00.021 { 00:17:00.021 "name": "BaseBdev3", 00:17:00.021 "aliases": [ 00:17:00.021 "7477050c-6d20-4944-b0d8-56bb3adb0f5a" 00:17:00.021 ], 00:17:00.021 "product_name": "Malloc disk", 00:17:00.021 "block_size": 512, 00:17:00.021 "num_blocks": 65536, 00:17:00.021 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:00.021 "assigned_rate_limits": { 00:17:00.021 "rw_ios_per_sec": 0, 00:17:00.021 "rw_mbytes_per_sec": 0, 00:17:00.021 "r_mbytes_per_sec": 0, 00:17:00.021 "w_mbytes_per_sec": 0 00:17:00.021 }, 00:17:00.021 "claimed": false, 00:17:00.021 "zoned": false, 00:17:00.021 "supported_io_types": { 00:17:00.021 "read": true, 00:17:00.021 "write": true, 00:17:00.021 "unmap": true, 00:17:00.021 "flush": true, 00:17:00.021 "reset": true, 00:17:00.021 "nvme_admin": false, 00:17:00.021 "nvme_io": false, 00:17:00.021 "nvme_io_md": 
false, 00:17:00.021 "write_zeroes": true, 00:17:00.021 "zcopy": true, 00:17:00.021 "get_zone_info": false, 00:17:00.021 "zone_management": false, 00:17:00.021 "zone_append": false, 00:17:00.021 "compare": false, 00:17:00.021 "compare_and_write": false, 00:17:00.021 "abort": true, 00:17:00.021 "seek_hole": false, 00:17:00.021 "seek_data": false, 00:17:00.021 "copy": true, 00:17:00.021 "nvme_iov_md": false 00:17:00.021 }, 00:17:00.021 "memory_domains": [ 00:17:00.021 { 00:17:00.021 "dma_device_id": "system", 00:17:00.021 "dma_device_type": 1 00:17:00.021 }, 00:17:00.021 { 00:17:00.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.021 "dma_device_type": 2 00:17:00.021 } 00:17:00.021 ], 00:17:00.021 "driver_specific": {} 00:17:00.021 } 00:17:00.021 ] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 [2024-11-05 16:31:12.944728] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.021 [2024-11-05 16:31:12.944778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.021 [2024-11-05 16:31:12.944806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:17:00.021 [2024-11-05 16:31:12.946927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.021 16:31:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.021 16:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.021 "name": "Existed_Raid", 00:17:00.021 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:00.021 "strip_size_kb": 64, 00:17:00.021 "state": "configuring", 00:17:00.021 "raid_level": "raid5f", 00:17:00.021 "superblock": true, 00:17:00.021 "num_base_bdevs": 3, 00:17:00.021 "num_base_bdevs_discovered": 2, 00:17:00.021 "num_base_bdevs_operational": 3, 00:17:00.021 "base_bdevs_list": [ 00:17:00.021 { 00:17:00.021 "name": "BaseBdev1", 00:17:00.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.021 "is_configured": false, 00:17:00.021 "data_offset": 0, 00:17:00.021 "data_size": 0 00:17:00.021 }, 00:17:00.021 { 00:17:00.021 "name": "BaseBdev2", 00:17:00.021 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:00.021 "is_configured": true, 00:17:00.021 "data_offset": 2048, 00:17:00.021 "data_size": 63488 00:17:00.021 }, 00:17:00.021 { 00:17:00.021 "name": "BaseBdev3", 00:17:00.021 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:00.021 "is_configured": true, 00:17:00.021 "data_offset": 2048, 00:17:00.021 "data_size": 63488 00:17:00.021 } 00:17:00.021 ] 00:17:00.021 }' 00:17:00.021 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.021 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.589 [2024-11-05 16:31:13.419974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.589 
16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:00.589 "name": "Existed_Raid", 00:17:00.589 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:00.589 "strip_size_kb": 64, 00:17:00.589 "state": "configuring", 00:17:00.589 "raid_level": "raid5f", 00:17:00.589 "superblock": true, 00:17:00.589 "num_base_bdevs": 3, 00:17:00.589 "num_base_bdevs_discovered": 1, 00:17:00.589 "num_base_bdevs_operational": 3, 00:17:00.589 "base_bdevs_list": [ 00:17:00.589 { 00:17:00.589 "name": "BaseBdev1", 00:17:00.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.589 "is_configured": false, 00:17:00.589 "data_offset": 0, 00:17:00.589 "data_size": 0 00:17:00.589 }, 00:17:00.589 { 00:17:00.589 "name": null, 00:17:00.589 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:00.589 "is_configured": false, 00:17:00.589 "data_offset": 0, 00:17:00.589 "data_size": 63488 00:17:00.589 }, 00:17:00.589 { 00:17:00.589 "name": "BaseBdev3", 00:17:00.589 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:00.589 "is_configured": true, 00:17:00.589 "data_offset": 2048, 00:17:00.589 "data_size": 63488 00:17:00.589 } 00:17:00.589 ] 00:17:00.589 }' 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.589 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.849 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.849 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:00.849 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.849 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.849 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.107 16:31:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:01.107 16:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.107 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.107 16:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 [2024-11-05 16:31:14.010245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.107 BaseBdev1 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.107 
16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 [ 00:17:01.107 { 00:17:01.107 "name": "BaseBdev1", 00:17:01.107 "aliases": [ 00:17:01.107 "3b8b5e15-ef48-43af-ac0f-753e96faf7b5" 00:17:01.107 ], 00:17:01.107 "product_name": "Malloc disk", 00:17:01.107 "block_size": 512, 00:17:01.107 "num_blocks": 65536, 00:17:01.107 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:01.107 "assigned_rate_limits": { 00:17:01.107 "rw_ios_per_sec": 0, 00:17:01.107 "rw_mbytes_per_sec": 0, 00:17:01.107 "r_mbytes_per_sec": 0, 00:17:01.107 "w_mbytes_per_sec": 0 00:17:01.107 }, 00:17:01.107 "claimed": true, 00:17:01.107 "claim_type": "exclusive_write", 00:17:01.107 "zoned": false, 00:17:01.107 "supported_io_types": { 00:17:01.107 "read": true, 00:17:01.107 "write": true, 00:17:01.107 "unmap": true, 00:17:01.107 "flush": true, 00:17:01.107 "reset": true, 00:17:01.107 "nvme_admin": false, 00:17:01.107 "nvme_io": false, 00:17:01.107 "nvme_io_md": false, 00:17:01.107 "write_zeroes": true, 00:17:01.107 "zcopy": true, 00:17:01.107 "get_zone_info": false, 00:17:01.107 "zone_management": false, 00:17:01.107 "zone_append": false, 00:17:01.107 "compare": false, 00:17:01.107 "compare_and_write": false, 00:17:01.107 "abort": true, 00:17:01.107 "seek_hole": false, 00:17:01.107 "seek_data": false, 00:17:01.107 "copy": true, 00:17:01.107 "nvme_iov_md": false 00:17:01.107 }, 00:17:01.107 "memory_domains": [ 00:17:01.107 { 00:17:01.107 "dma_device_id": "system", 00:17:01.107 "dma_device_type": 1 00:17:01.107 }, 00:17:01.107 { 00:17:01.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.107 "dma_device_type": 2 00:17:01.107 } 00:17:01.107 ], 00:17:01.107 "driver_specific": {} 00:17:01.107 } 00:17:01.107 ] 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.107 
16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.107 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:01.107 "name": "Existed_Raid", 00:17:01.107 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:01.107 "strip_size_kb": 64, 00:17:01.107 "state": "configuring", 00:17:01.107 "raid_level": "raid5f", 00:17:01.107 "superblock": true, 00:17:01.107 "num_base_bdevs": 3, 00:17:01.107 "num_base_bdevs_discovered": 2, 00:17:01.107 "num_base_bdevs_operational": 3, 00:17:01.107 "base_bdevs_list": [ 00:17:01.107 { 00:17:01.107 "name": "BaseBdev1", 00:17:01.107 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:01.107 "is_configured": true, 00:17:01.107 "data_offset": 2048, 00:17:01.107 "data_size": 63488 00:17:01.107 }, 00:17:01.107 { 00:17:01.107 "name": null, 00:17:01.108 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:01.108 "is_configured": false, 00:17:01.108 "data_offset": 0, 00:17:01.108 "data_size": 63488 00:17:01.108 }, 00:17:01.108 { 00:17:01.108 "name": "BaseBdev3", 00:17:01.108 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:01.108 "is_configured": true, 00:17:01.108 "data_offset": 2048, 00:17:01.108 "data_size": 63488 00:17:01.108 } 00:17:01.108 ] 00:17:01.108 }' 00:17:01.108 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.108 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.415 [2024-11-05 16:31:14.457625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.415 16:31:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.415 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.697 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.697 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.697 "name": "Existed_Raid", 00:17:01.697 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:01.697 "strip_size_kb": 64, 00:17:01.697 "state": "configuring", 00:17:01.697 "raid_level": "raid5f", 00:17:01.697 "superblock": true, 00:17:01.697 "num_base_bdevs": 3, 00:17:01.697 "num_base_bdevs_discovered": 1, 00:17:01.697 "num_base_bdevs_operational": 3, 00:17:01.697 "base_bdevs_list": [ 00:17:01.697 { 00:17:01.697 "name": "BaseBdev1", 00:17:01.697 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:01.697 "is_configured": true, 00:17:01.697 "data_offset": 2048, 00:17:01.697 "data_size": 63488 00:17:01.697 }, 00:17:01.697 { 00:17:01.697 "name": null, 00:17:01.697 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:01.697 "is_configured": false, 00:17:01.697 "data_offset": 0, 00:17:01.697 "data_size": 63488 00:17:01.697 }, 00:17:01.697 { 00:17:01.697 "name": null, 00:17:01.697 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:01.697 "is_configured": false, 00:17:01.697 "data_offset": 0, 00:17:01.697 "data_size": 63488 00:17:01.697 } 00:17:01.697 ] 00:17:01.697 }' 00:17:01.697 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.697 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.956 [2024-11-05 16:31:14.864975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.956 16:31:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.956 "name": "Existed_Raid", 00:17:01.956 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:01.956 "strip_size_kb": 64, 00:17:01.956 "state": "configuring", 00:17:01.956 "raid_level": "raid5f", 00:17:01.956 "superblock": true, 00:17:01.956 "num_base_bdevs": 3, 00:17:01.956 "num_base_bdevs_discovered": 2, 00:17:01.956 "num_base_bdevs_operational": 3, 00:17:01.956 "base_bdevs_list": [ 00:17:01.956 { 00:17:01.956 "name": "BaseBdev1", 00:17:01.956 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:01.956 "is_configured": true, 00:17:01.956 "data_offset": 2048, 00:17:01.956 "data_size": 63488 00:17:01.956 }, 00:17:01.956 { 00:17:01.956 "name": null, 00:17:01.956 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:01.956 "is_configured": false, 00:17:01.956 "data_offset": 0, 00:17:01.956 "data_size": 63488 00:17:01.956 }, 00:17:01.956 { 
00:17:01.956 "name": "BaseBdev3", 00:17:01.956 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:01.956 "is_configured": true, 00:17:01.956 "data_offset": 2048, 00:17:01.956 "data_size": 63488 00:17:01.956 } 00:17:01.956 ] 00:17:01.956 }' 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.956 16:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:02.214 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:02.215 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.215 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.215 [2024-11-05 16:31:15.263692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.472 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.472 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:02.472 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.473 "name": "Existed_Raid", 00:17:02.473 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:02.473 "strip_size_kb": 64, 00:17:02.473 "state": "configuring", 00:17:02.473 "raid_level": "raid5f", 00:17:02.473 "superblock": true, 00:17:02.473 "num_base_bdevs": 3, 00:17:02.473 "num_base_bdevs_discovered": 1, 00:17:02.473 
"num_base_bdevs_operational": 3, 00:17:02.473 "base_bdevs_list": [ 00:17:02.473 { 00:17:02.473 "name": null, 00:17:02.473 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:02.473 "is_configured": false, 00:17:02.473 "data_offset": 0, 00:17:02.473 "data_size": 63488 00:17:02.473 }, 00:17:02.473 { 00:17:02.473 "name": null, 00:17:02.473 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:02.473 "is_configured": false, 00:17:02.473 "data_offset": 0, 00:17:02.473 "data_size": 63488 00:17:02.473 }, 00:17:02.473 { 00:17:02.473 "name": "BaseBdev3", 00:17:02.473 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:02.473 "is_configured": true, 00:17:02.473 "data_offset": 2048, 00:17:02.473 "data_size": 63488 00:17:02.473 } 00:17:02.473 ] 00:17:02.473 }' 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.473 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.731 16:31:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.731 [2024-11-05 16:31:15.752558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.731 "name": "Existed_Raid", 00:17:02.731 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:02.731 "strip_size_kb": 64, 00:17:02.731 "state": "configuring", 00:17:02.731 "raid_level": "raid5f", 00:17:02.731 "superblock": true, 00:17:02.731 "num_base_bdevs": 3, 00:17:02.731 "num_base_bdevs_discovered": 2, 00:17:02.731 "num_base_bdevs_operational": 3, 00:17:02.731 "base_bdevs_list": [ 00:17:02.731 { 00:17:02.731 "name": null, 00:17:02.731 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:02.731 "is_configured": false, 00:17:02.731 "data_offset": 0, 00:17:02.731 "data_size": 63488 00:17:02.731 }, 00:17:02.731 { 00:17:02.731 "name": "BaseBdev2", 00:17:02.731 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:02.731 "is_configured": true, 00:17:02.731 "data_offset": 2048, 00:17:02.731 "data_size": 63488 00:17:02.731 }, 00:17:02.731 { 00:17:02.731 "name": "BaseBdev3", 00:17:02.731 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:02.731 "is_configured": true, 00:17:02.731 "data_offset": 2048, 00:17:02.731 "data_size": 63488 00:17:02.731 } 00:17:02.731 ] 00:17:02.731 }' 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.731 16:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.297 16:31:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b8b5e15-ef48-43af-ac0f-753e96faf7b5 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 [2024-11-05 16:31:16.243714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:03.297 [2024-11-05 16:31:16.244040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:03.297 [2024-11-05 16:31:16.244100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:03.297 NewBaseBdev 00:17:03.297 [2024-11-05 16:31:16.244440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.297 16:31:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 [2024-11-05 16:31:16.250720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:03.297 [2024-11-05 16:31:16.250799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:03.297 [2024-11-05 16:31:16.251195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 [ 00:17:03.297 { 00:17:03.297 "name": "NewBaseBdev", 00:17:03.297 
"aliases": [ 00:17:03.297 "3b8b5e15-ef48-43af-ac0f-753e96faf7b5" 00:17:03.297 ], 00:17:03.297 "product_name": "Malloc disk", 00:17:03.297 "block_size": 512, 00:17:03.297 "num_blocks": 65536, 00:17:03.297 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:03.297 "assigned_rate_limits": { 00:17:03.297 "rw_ios_per_sec": 0, 00:17:03.297 "rw_mbytes_per_sec": 0, 00:17:03.297 "r_mbytes_per_sec": 0, 00:17:03.297 "w_mbytes_per_sec": 0 00:17:03.297 }, 00:17:03.297 "claimed": true, 00:17:03.297 "claim_type": "exclusive_write", 00:17:03.297 "zoned": false, 00:17:03.297 "supported_io_types": { 00:17:03.297 "read": true, 00:17:03.297 "write": true, 00:17:03.297 "unmap": true, 00:17:03.297 "flush": true, 00:17:03.297 "reset": true, 00:17:03.297 "nvme_admin": false, 00:17:03.297 "nvme_io": false, 00:17:03.297 "nvme_io_md": false, 00:17:03.297 "write_zeroes": true, 00:17:03.297 "zcopy": true, 00:17:03.297 "get_zone_info": false, 00:17:03.297 "zone_management": false, 00:17:03.297 "zone_append": false, 00:17:03.297 "compare": false, 00:17:03.297 "compare_and_write": false, 00:17:03.297 "abort": true, 00:17:03.297 "seek_hole": false, 00:17:03.297 "seek_data": false, 00:17:03.297 "copy": true, 00:17:03.297 "nvme_iov_md": false 00:17:03.297 }, 00:17:03.297 "memory_domains": [ 00:17:03.297 { 00:17:03.297 "dma_device_id": "system", 00:17:03.297 "dma_device_type": 1 00:17:03.297 }, 00:17:03.297 { 00:17:03.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.297 "dma_device_type": 2 00:17:03.297 } 00:17:03.297 ], 00:17:03.297 "driver_specific": {} 00:17:03.297 } 00:17:03.297 ] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:03.297 16:31:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.297 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.298 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.298 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.298 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.298 "name": "Existed_Raid", 00:17:03.298 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:03.298 "strip_size_kb": 64, 00:17:03.298 "state": "online", 00:17:03.298 "raid_level": "raid5f", 00:17:03.298 "superblock": true, 00:17:03.298 
"num_base_bdevs": 3, 00:17:03.298 "num_base_bdevs_discovered": 3, 00:17:03.298 "num_base_bdevs_operational": 3, 00:17:03.298 "base_bdevs_list": [ 00:17:03.298 { 00:17:03.298 "name": "NewBaseBdev", 00:17:03.298 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:03.298 "is_configured": true, 00:17:03.298 "data_offset": 2048, 00:17:03.298 "data_size": 63488 00:17:03.298 }, 00:17:03.298 { 00:17:03.298 "name": "BaseBdev2", 00:17:03.298 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:03.298 "is_configured": true, 00:17:03.298 "data_offset": 2048, 00:17:03.298 "data_size": 63488 00:17:03.298 }, 00:17:03.298 { 00:17:03.298 "name": "BaseBdev3", 00:17:03.298 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:03.298 "is_configured": true, 00:17:03.298 "data_offset": 2048, 00:17:03.298 "data_size": 63488 00:17:03.298 } 00:17:03.298 ] 00:17:03.298 }' 00:17:03.298 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.298 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.863 [2024-11-05 16:31:16.660977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.863 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.863 "name": "Existed_Raid", 00:17:03.863 "aliases": [ 00:17:03.863 "152f49fe-4894-4c84-a58d-fb9b81de5b88" 00:17:03.863 ], 00:17:03.863 "product_name": "Raid Volume", 00:17:03.863 "block_size": 512, 00:17:03.863 "num_blocks": 126976, 00:17:03.863 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:03.863 "assigned_rate_limits": { 00:17:03.863 "rw_ios_per_sec": 0, 00:17:03.863 "rw_mbytes_per_sec": 0, 00:17:03.863 "r_mbytes_per_sec": 0, 00:17:03.863 "w_mbytes_per_sec": 0 00:17:03.863 }, 00:17:03.863 "claimed": false, 00:17:03.863 "zoned": false, 00:17:03.863 "supported_io_types": { 00:17:03.863 "read": true, 00:17:03.863 "write": true, 00:17:03.863 "unmap": false, 00:17:03.863 "flush": false, 00:17:03.864 "reset": true, 00:17:03.864 "nvme_admin": false, 00:17:03.864 "nvme_io": false, 00:17:03.864 "nvme_io_md": false, 00:17:03.864 "write_zeroes": true, 00:17:03.864 "zcopy": false, 00:17:03.864 "get_zone_info": false, 00:17:03.864 "zone_management": false, 00:17:03.864 "zone_append": false, 00:17:03.864 "compare": false, 00:17:03.864 "compare_and_write": false, 00:17:03.864 "abort": false, 00:17:03.864 "seek_hole": false, 00:17:03.864 "seek_data": false, 00:17:03.864 "copy": false, 00:17:03.864 "nvme_iov_md": false 00:17:03.864 }, 00:17:03.864 "driver_specific": { 00:17:03.864 "raid": { 00:17:03.864 "uuid": "152f49fe-4894-4c84-a58d-fb9b81de5b88", 00:17:03.864 
"strip_size_kb": 64, 00:17:03.864 "state": "online", 00:17:03.864 "raid_level": "raid5f", 00:17:03.864 "superblock": true, 00:17:03.864 "num_base_bdevs": 3, 00:17:03.864 "num_base_bdevs_discovered": 3, 00:17:03.864 "num_base_bdevs_operational": 3, 00:17:03.864 "base_bdevs_list": [ 00:17:03.864 { 00:17:03.864 "name": "NewBaseBdev", 00:17:03.864 "uuid": "3b8b5e15-ef48-43af-ac0f-753e96faf7b5", 00:17:03.864 "is_configured": true, 00:17:03.864 "data_offset": 2048, 00:17:03.864 "data_size": 63488 00:17:03.864 }, 00:17:03.864 { 00:17:03.864 "name": "BaseBdev2", 00:17:03.864 "uuid": "64d99079-7f0a-4aaa-93e7-a6c95f2e4c3c", 00:17:03.864 "is_configured": true, 00:17:03.864 "data_offset": 2048, 00:17:03.864 "data_size": 63488 00:17:03.864 }, 00:17:03.864 { 00:17:03.864 "name": "BaseBdev3", 00:17:03.864 "uuid": "7477050c-6d20-4944-b0d8-56bb3adb0f5a", 00:17:03.864 "is_configured": true, 00:17:03.864 "data_offset": 2048, 00:17:03.864 "data_size": 63488 00:17:03.864 } 00:17:03.864 ] 00:17:03.864 } 00:17:03.864 } 00:17:03.864 }' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:03.864 BaseBdev2 00:17:03.864 BaseBdev3' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.864 
16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.864 [2024-11-05 16:31:16.880694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.864 [2024-11-05 16:31:16.880733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.864 [2024-11-05 16:31:16.880842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.864 [2024-11-05 16:31:16.881220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.864 [2024-11-05 16:31:16.881238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80854 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80854 ']' 00:17:03.864 16:31:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80854 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80854 00:17:03.864 killing process with pid 80854 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80854' 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80854 00:17:03.864 16:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80854 00:17:03.864 [2024-11-05 16:31:16.905669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.432 [2024-11-05 16:31:17.237147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.368 16:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:05.368 00:17:05.368 real 0m10.366s 00:17:05.368 user 0m16.251s 00:17:05.368 sys 0m1.693s 00:17:05.368 16:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:05.368 16:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.368 ************************************ 00:17:05.368 END TEST raid5f_state_function_test_sb 00:17:05.368 ************************************ 00:17:05.628 16:31:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:17:05.628 16:31:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:05.628 16:31:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:05.628 16:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.628 ************************************ 00:17:05.628 START TEST raid5f_superblock_test 00:17:05.628 ************************************ 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81473 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81473 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81473 ']' 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.628 16:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.628 [2024-11-05 16:31:18.580694] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:17:05.628 [2024-11-05 16:31:18.580831] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81473 ] 00:17:05.888 [2024-11-05 16:31:18.773063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.888 [2024-11-05 16:31:18.911714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.147 [2024-11-05 16:31:19.141608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.147 [2024-11-05 16:31:19.141690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.416 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.700 malloc1 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.700 [2024-11-05 16:31:19.536969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.700 [2024-11-05 16:31:19.537050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.700 [2024-11-05 16:31:19.537075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.700 [2024-11-05 16:31:19.537086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.700 [2024-11-05 16:31:19.539615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.700 [2024-11-05 16:31:19.539662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.700 pt1 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:06.700 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 malloc2 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 [2024-11-05 16:31:19.604637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.701 [2024-11-05 16:31:19.604806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.701 [2024-11-05 16:31:19.604871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.701 [2024-11-05 16:31:19.604916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.701 [2024-11-05 16:31:19.607513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.701 [2024-11-05 16:31:19.607622] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.701 pt2 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 malloc3 00:17:06.701 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.702 [2024-11-05 16:31:19.680555] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:06.702 [2024-11-05 16:31:19.680710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.702 [2024-11-05 16:31:19.680760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.702 [2024-11-05 16:31:19.680804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.702 [2024-11-05 16:31:19.683475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.702 [2024-11-05 16:31:19.683604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:06.702 pt3 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.702 [2024-11-05 16:31:19.692639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.702 [2024-11-05 16:31:19.694773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.702 [2024-11-05 16:31:19.694848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:06.702 [2024-11-05 16:31:19.695043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.702 [2024-11-05 16:31:19.695066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:06.702 [2024-11-05 16:31:19.695360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:06.702 [2024-11-05 16:31:19.702346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.702 [2024-11-05 16:31:19.702370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.702 [2024-11-05 16:31:19.702617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.702 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.703 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.703 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.703 "name": "raid_bdev1", 00:17:06.703 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:06.703 "strip_size_kb": 64, 00:17:06.703 "state": "online", 00:17:06.703 "raid_level": "raid5f", 00:17:06.703 "superblock": true, 00:17:06.703 "num_base_bdevs": 3, 00:17:06.703 "num_base_bdevs_discovered": 3, 00:17:06.703 "num_base_bdevs_operational": 3, 00:17:06.703 "base_bdevs_list": [ 00:17:06.703 { 00:17:06.703 "name": "pt1", 00:17:06.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.703 "is_configured": true, 00:17:06.703 "data_offset": 2048, 00:17:06.703 "data_size": 63488 00:17:06.703 }, 00:17:06.703 { 00:17:06.703 "name": "pt2", 00:17:06.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.703 "is_configured": true, 00:17:06.703 "data_offset": 2048, 00:17:06.703 "data_size": 63488 00:17:06.703 }, 00:17:06.703 { 00:17:06.703 "name": "pt3", 00:17:06.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.703 "is_configured": true, 00:17:06.703 "data_offset": 2048, 00:17:06.703 "data_size": 63488 00:17:06.703 } 00:17:06.703 ] 00:17:06.703 }' 00:17:06.703 16:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.703 16:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.272 16:31:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.272 [2024-11-05 16:31:20.157710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.272 "name": "raid_bdev1", 00:17:07.272 "aliases": [ 00:17:07.272 "9108aea9-c618-4592-89b3-4cc4a47c6681" 00:17:07.272 ], 00:17:07.272 "product_name": "Raid Volume", 00:17:07.272 "block_size": 512, 00:17:07.272 "num_blocks": 126976, 00:17:07.272 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:07.272 "assigned_rate_limits": { 00:17:07.272 "rw_ios_per_sec": 0, 00:17:07.272 "rw_mbytes_per_sec": 0, 00:17:07.272 "r_mbytes_per_sec": 0, 00:17:07.272 "w_mbytes_per_sec": 0 00:17:07.272 }, 00:17:07.272 "claimed": false, 00:17:07.272 "zoned": false, 00:17:07.272 "supported_io_types": { 00:17:07.272 "read": true, 00:17:07.272 "write": true, 00:17:07.272 "unmap": false, 00:17:07.272 "flush": false, 00:17:07.272 "reset": true, 00:17:07.272 "nvme_admin": false, 00:17:07.272 "nvme_io": false, 00:17:07.272 "nvme_io_md": false, 
00:17:07.272 "write_zeroes": true, 00:17:07.272 "zcopy": false, 00:17:07.272 "get_zone_info": false, 00:17:07.272 "zone_management": false, 00:17:07.272 "zone_append": false, 00:17:07.272 "compare": false, 00:17:07.272 "compare_and_write": false, 00:17:07.272 "abort": false, 00:17:07.272 "seek_hole": false, 00:17:07.272 "seek_data": false, 00:17:07.272 "copy": false, 00:17:07.272 "nvme_iov_md": false 00:17:07.272 }, 00:17:07.272 "driver_specific": { 00:17:07.272 "raid": { 00:17:07.272 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:07.272 "strip_size_kb": 64, 00:17:07.272 "state": "online", 00:17:07.272 "raid_level": "raid5f", 00:17:07.272 "superblock": true, 00:17:07.272 "num_base_bdevs": 3, 00:17:07.272 "num_base_bdevs_discovered": 3, 00:17:07.272 "num_base_bdevs_operational": 3, 00:17:07.272 "base_bdevs_list": [ 00:17:07.272 { 00:17:07.272 "name": "pt1", 00:17:07.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.272 "is_configured": true, 00:17:07.272 "data_offset": 2048, 00:17:07.272 "data_size": 63488 00:17:07.272 }, 00:17:07.272 { 00:17:07.272 "name": "pt2", 00:17:07.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.272 "is_configured": true, 00:17:07.272 "data_offset": 2048, 00:17:07.272 "data_size": 63488 00:17:07.272 }, 00:17:07.272 { 00:17:07.272 "name": "pt3", 00:17:07.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.272 "is_configured": true, 00:17:07.272 "data_offset": 2048, 00:17:07.272 "data_size": 63488 00:17:07.272 } 00:17:07.272 ] 00:17:07.272 } 00:17:07.272 } 00:17:07.272 }' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.272 pt2 00:17:07.272 pt3' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.272 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.532 
16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 [2024-11-05 16:31:20.449192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9108aea9-c618-4592-89b3-4cc4a47c6681 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9108aea9-c618-4592-89b3-4cc4a47c6681 ']' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.532 16:31:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 [2024-11-05 16:31:20.476888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.532 [2024-11-05 16:31:20.476989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.532 [2024-11-05 16:31:20.477101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.532 [2024-11-05 16:31:20.477219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.532 [2024-11-05 16:31:20.477236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.532 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.532 [2024-11-05 16:31:20.616750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:07.532 [2024-11-05 16:31:20.618926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:07.533 [2024-11-05 16:31:20.619063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:07.533 [2024-11-05 16:31:20.619134] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:07.533 [2024-11-05 16:31:20.619198] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:07.533 [2024-11-05 16:31:20.619223] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:07.533 [2024-11-05 16:31:20.619244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.533 [2024-11-05 16:31:20.619256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:07.792 request: 00:17:07.792 { 00:17:07.792 "name": "raid_bdev1", 00:17:07.792 "raid_level": "raid5f", 00:17:07.792 "base_bdevs": [ 00:17:07.792 "malloc1", 00:17:07.792 "malloc2", 00:17:07.792 "malloc3" 00:17:07.792 ], 00:17:07.792 "strip_size_kb": 64, 00:17:07.792 "superblock": false, 00:17:07.792 "method": "bdev_raid_create", 00:17:07.792 "req_id": 1 00:17:07.792 } 00:17:07.792 Got JSON-RPC error response 00:17:07.792 response: 00:17:07.792 { 00:17:07.792 "code": -17, 00:17:07.792 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:07.792 } 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:07.792 
16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.792 [2024-11-05 16:31:20.680585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.792 [2024-11-05 16:31:20.680664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.792 [2024-11-05 16:31:20.680688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:07.792 [2024-11-05 16:31:20.680699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.792 [2024-11-05 16:31:20.683278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.792 [2024-11-05 16:31:20.683329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.792 [2024-11-05 16:31:20.683442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:07.792 [2024-11-05 16:31:20.683506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.792 pt1 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.792 "name": "raid_bdev1", 00:17:07.792 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:07.792 "strip_size_kb": 64, 00:17:07.792 "state": "configuring", 00:17:07.792 "raid_level": "raid5f", 00:17:07.792 "superblock": true, 00:17:07.792 "num_base_bdevs": 3, 00:17:07.792 "num_base_bdevs_discovered": 1, 00:17:07.792 
"num_base_bdevs_operational": 3, 00:17:07.792 "base_bdevs_list": [ 00:17:07.792 { 00:17:07.792 "name": "pt1", 00:17:07.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.792 "is_configured": true, 00:17:07.792 "data_offset": 2048, 00:17:07.792 "data_size": 63488 00:17:07.792 }, 00:17:07.792 { 00:17:07.792 "name": null, 00:17:07.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.792 "is_configured": false, 00:17:07.792 "data_offset": 2048, 00:17:07.792 "data_size": 63488 00:17:07.792 }, 00:17:07.792 { 00:17:07.792 "name": null, 00:17:07.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.792 "is_configured": false, 00:17:07.792 "data_offset": 2048, 00:17:07.792 "data_size": 63488 00:17:07.792 } 00:17:07.792 ] 00:17:07.792 }' 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.792 16:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 [2024-11-05 16:31:21.103934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.052 [2024-11-05 16:31:21.104079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.052 [2024-11-05 16:31:21.104110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:08.052 [2024-11-05 16:31:21.104120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.052 [2024-11-05 16:31:21.104647] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.052 [2024-11-05 16:31:21.104683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.052 [2024-11-05 16:31:21.104789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.052 [2024-11-05 16:31:21.104813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.052 pt2 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 [2024-11-05 16:31:21.115938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.052 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.311 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.311 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.311 "name": "raid_bdev1", 00:17:08.311 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:08.311 "strip_size_kb": 64, 00:17:08.311 "state": "configuring", 00:17:08.311 "raid_level": "raid5f", 00:17:08.311 "superblock": true, 00:17:08.311 "num_base_bdevs": 3, 00:17:08.311 "num_base_bdevs_discovered": 1, 00:17:08.311 "num_base_bdevs_operational": 3, 00:17:08.311 "base_bdevs_list": [ 00:17:08.311 { 00:17:08.311 "name": "pt1", 00:17:08.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.311 "is_configured": true, 00:17:08.311 "data_offset": 2048, 00:17:08.311 "data_size": 63488 00:17:08.311 }, 00:17:08.311 { 00:17:08.311 "name": null, 00:17:08.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.311 "is_configured": false, 00:17:08.311 "data_offset": 0, 00:17:08.311 "data_size": 63488 00:17:08.311 }, 00:17:08.311 { 00:17:08.311 "name": null, 00:17:08.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.311 "is_configured": false, 00:17:08.311 "data_offset": 2048, 00:17:08.311 "data_size": 63488 00:17:08.311 } 00:17:08.311 ] 00:17:08.311 }' 00:17:08.311 16:31:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.311 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.572 [2024-11-05 16:31:21.571114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.572 [2024-11-05 16:31:21.571232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.572 [2024-11-05 16:31:21.571279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:08.572 [2024-11-05 16:31:21.571316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.572 [2024-11-05 16:31:21.571815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.572 [2024-11-05 16:31:21.571878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.572 [2024-11-05 16:31:21.571998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.572 [2024-11-05 16:31:21.572053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.572 pt2 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:08.572 16:31:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.572 [2024-11-05 16:31:21.579077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:08.572 [2024-11-05 16:31:21.579164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.572 [2024-11-05 16:31:21.579202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:08.572 [2024-11-05 16:31:21.579234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.572 [2024-11-05 16:31:21.579664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.572 [2024-11-05 16:31:21.579729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:08.572 [2024-11-05 16:31:21.579827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:08.572 [2024-11-05 16:31:21.579881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:08.572 [2024-11-05 16:31:21.580045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.572 [2024-11-05 16:31:21.580089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:08.572 [2024-11-05 16:31:21.580383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.572 [2024-11-05 16:31:21.586612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.572 [2024-11-05 16:31:21.586698] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:08.572 [2024-11-05 16:31:21.587031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.572 pt3 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.572 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.572 "name": "raid_bdev1", 00:17:08.572 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:08.572 "strip_size_kb": 64, 00:17:08.572 "state": "online", 00:17:08.572 "raid_level": "raid5f", 00:17:08.572 "superblock": true, 00:17:08.572 "num_base_bdevs": 3, 00:17:08.572 "num_base_bdevs_discovered": 3, 00:17:08.572 "num_base_bdevs_operational": 3, 00:17:08.572 "base_bdevs_list": [ 00:17:08.572 { 00:17:08.572 "name": "pt1", 00:17:08.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.572 "is_configured": true, 00:17:08.572 "data_offset": 2048, 00:17:08.572 "data_size": 63488 00:17:08.572 }, 00:17:08.572 { 00:17:08.572 "name": "pt2", 00:17:08.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.572 "is_configured": true, 00:17:08.572 "data_offset": 2048, 00:17:08.572 "data_size": 63488 00:17:08.572 }, 00:17:08.573 { 00:17:08.573 "name": "pt3", 00:17:08.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.573 "is_configured": true, 00:17:08.573 "data_offset": 2048, 00:17:08.573 "data_size": 63488 00:17:08.573 } 00:17:08.573 ] 00:17:08.573 }' 00:17:08.573 16:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.573 16:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.142 
16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.142 [2024-11-05 16:31:22.069279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.142 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.142 "name": "raid_bdev1", 00:17:09.142 "aliases": [ 00:17:09.142 "9108aea9-c618-4592-89b3-4cc4a47c6681" 00:17:09.142 ], 00:17:09.142 "product_name": "Raid Volume", 00:17:09.142 "block_size": 512, 00:17:09.142 "num_blocks": 126976, 00:17:09.142 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:09.142 "assigned_rate_limits": { 00:17:09.142 "rw_ios_per_sec": 0, 00:17:09.142 "rw_mbytes_per_sec": 0, 00:17:09.142 "r_mbytes_per_sec": 0, 00:17:09.142 "w_mbytes_per_sec": 0 00:17:09.142 }, 00:17:09.142 "claimed": false, 00:17:09.142 "zoned": false, 00:17:09.142 "supported_io_types": { 00:17:09.142 "read": true, 00:17:09.142 "write": true, 00:17:09.142 "unmap": false, 00:17:09.142 "flush": false, 00:17:09.142 "reset": true, 00:17:09.142 "nvme_admin": false, 00:17:09.142 "nvme_io": false, 00:17:09.142 "nvme_io_md": false, 00:17:09.142 "write_zeroes": true, 00:17:09.142 "zcopy": false, 00:17:09.142 "get_zone_info": false, 
00:17:09.142 "zone_management": false, 00:17:09.142 "zone_append": false, 00:17:09.142 "compare": false, 00:17:09.142 "compare_and_write": false, 00:17:09.142 "abort": false, 00:17:09.142 "seek_hole": false, 00:17:09.142 "seek_data": false, 00:17:09.142 "copy": false, 00:17:09.142 "nvme_iov_md": false 00:17:09.142 }, 00:17:09.142 "driver_specific": { 00:17:09.142 "raid": { 00:17:09.142 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:09.142 "strip_size_kb": 64, 00:17:09.142 "state": "online", 00:17:09.142 "raid_level": "raid5f", 00:17:09.142 "superblock": true, 00:17:09.142 "num_base_bdevs": 3, 00:17:09.142 "num_base_bdevs_discovered": 3, 00:17:09.142 "num_base_bdevs_operational": 3, 00:17:09.142 "base_bdevs_list": [ 00:17:09.142 { 00:17:09.142 "name": "pt1", 00:17:09.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.142 "is_configured": true, 00:17:09.143 "data_offset": 2048, 00:17:09.143 "data_size": 63488 00:17:09.143 }, 00:17:09.143 { 00:17:09.143 "name": "pt2", 00:17:09.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.143 "is_configured": true, 00:17:09.143 "data_offset": 2048, 00:17:09.143 "data_size": 63488 00:17:09.143 }, 00:17:09.143 { 00:17:09.143 "name": "pt3", 00:17:09.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.143 "is_configured": true, 00:17:09.143 "data_offset": 2048, 00:17:09.143 "data_size": 63488 00:17:09.143 } 00:17:09.143 ] 00:17:09.143 } 00:17:09.143 } 00:17:09.143 }' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:09.143 pt2 00:17:09.143 pt3' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.143 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.403 [2024-11-05 16:31:22.372777] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9108aea9-c618-4592-89b3-4cc4a47c6681 '!=' 9108aea9-c618-4592-89b3-4cc4a47c6681 ']' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.403 16:31:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.403 [2024-11-05 16:31:22.412556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.403 "name": "raid_bdev1", 00:17:09.403 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:09.403 "strip_size_kb": 64, 00:17:09.403 "state": "online", 00:17:09.403 "raid_level": "raid5f", 00:17:09.403 "superblock": true, 00:17:09.403 "num_base_bdevs": 3, 00:17:09.403 "num_base_bdevs_discovered": 2, 00:17:09.403 "num_base_bdevs_operational": 2, 00:17:09.403 "base_bdevs_list": [ 00:17:09.403 { 00:17:09.403 "name": null, 00:17:09.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.403 "is_configured": false, 00:17:09.403 "data_offset": 0, 00:17:09.403 "data_size": 63488 00:17:09.403 }, 00:17:09.403 { 00:17:09.403 "name": "pt2", 00:17:09.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.403 "is_configured": true, 00:17:09.403 "data_offset": 2048, 00:17:09.403 "data_size": 63488 00:17:09.403 }, 00:17:09.403 { 00:17:09.403 "name": "pt3", 00:17:09.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.403 "is_configured": true, 00:17:09.403 "data_offset": 2048, 00:17:09.403 "data_size": 63488 00:17:09.403 } 00:17:09.403 ] 00:17:09.403 }' 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.403 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 [2024-11-05 16:31:22.867715] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:09.973 [2024-11-05 16:31:22.867748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.973 [2024-11-05 16:31:22.867836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.973 [2024-11-05 16:31:22.867902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.973 [2024-11-05 16:31:22.867918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 16:31:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.973 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.973 [2024-11-05 16:31:22.955513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.974 [2024-11-05 16:31:22.955583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.974 [2024-11-05 16:31:22.955600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:09.974 [2024-11-05 16:31:22.955611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:09.974 [2024-11-05 16:31:22.957843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.974 [2024-11-05 16:31:22.957884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.974 [2024-11-05 16:31:22.957968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.974 [2024-11-05 16:31:22.958023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.974 pt2 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.974 16:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.974 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.974 "name": "raid_bdev1", 00:17:09.974 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:09.974 "strip_size_kb": 64, 00:17:09.974 "state": "configuring", 00:17:09.974 "raid_level": "raid5f", 00:17:09.974 "superblock": true, 00:17:09.974 "num_base_bdevs": 3, 00:17:09.974 "num_base_bdevs_discovered": 1, 00:17:09.974 "num_base_bdevs_operational": 2, 00:17:09.974 "base_bdevs_list": [ 00:17:09.974 { 00:17:09.974 "name": null, 00:17:09.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.974 "is_configured": false, 00:17:09.974 "data_offset": 2048, 00:17:09.974 "data_size": 63488 00:17:09.974 }, 00:17:09.974 { 00:17:09.974 "name": "pt2", 00:17:09.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.974 "is_configured": true, 00:17:09.974 "data_offset": 2048, 00:17:09.974 "data_size": 63488 00:17:09.974 }, 00:17:09.974 { 00:17:09.974 "name": null, 00:17:09.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.974 "is_configured": false, 00:17:09.974 "data_offset": 2048, 00:17:09.974 "data_size": 63488 00:17:09.974 } 00:17:09.974 ] 00:17:09.974 }' 00:17:09.974 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.974 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.542 [2024-11-05 16:31:23.374851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.542 [2024-11-05 16:31:23.375001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.542 [2024-11-05 16:31:23.375056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:10.542 [2024-11-05 16:31:23.375101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.542 [2024-11-05 16:31:23.375754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.542 [2024-11-05 16:31:23.375859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.542 [2024-11-05 16:31:23.376039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:10.542 [2024-11-05 16:31:23.376149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.542 [2024-11-05 16:31:23.376390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:10.542 [2024-11-05 16:31:23.376473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:10.542 [2024-11-05 16:31:23.376855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:10.542 [2024-11-05 16:31:23.383597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:10.542 [2024-11-05 16:31:23.383664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:10.542 [2024-11-05 16:31:23.384114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.542 pt3 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.542 "name": "raid_bdev1", 00:17:10.542 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:10.542 "strip_size_kb": 64, 00:17:10.542 "state": "online", 00:17:10.542 "raid_level": "raid5f", 00:17:10.542 "superblock": true, 00:17:10.542 "num_base_bdevs": 3, 00:17:10.542 "num_base_bdevs_discovered": 2, 00:17:10.542 "num_base_bdevs_operational": 2, 00:17:10.542 "base_bdevs_list": [ 00:17:10.542 { 00:17:10.542 "name": null, 00:17:10.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.542 "is_configured": false, 00:17:10.542 "data_offset": 2048, 00:17:10.542 "data_size": 63488 00:17:10.542 }, 00:17:10.542 { 00:17:10.542 "name": "pt2", 00:17:10.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.542 "is_configured": true, 00:17:10.542 "data_offset": 2048, 00:17:10.542 "data_size": 63488 00:17:10.542 }, 00:17:10.542 { 00:17:10.542 "name": "pt3", 00:17:10.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.542 "is_configured": true, 00:17:10.542 "data_offset": 2048, 00:17:10.542 "data_size": 63488 00:17:10.542 } 00:17:10.542 ] 00:17:10.542 }' 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.542 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.802 [2024-11-05 16:31:23.872388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.802 [2024-11-05 16:31:23.872436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.802 [2024-11-05 16:31:23.872554] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:10.802 [2024-11-05 16:31:23.872640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.802 [2024-11-05 16:31:23.872653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:10.802 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.061 16:31:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 [2024-11-05 16:31:23.948300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.061 [2024-11-05 16:31:23.948401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.061 [2024-11-05 16:31:23.948430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:11.061 [2024-11-05 16:31:23.948443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.061 [2024-11-05 16:31:23.951433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.061 [2024-11-05 16:31:23.951490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.061 [2024-11-05 16:31:23.951625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:11.061 [2024-11-05 16:31:23.951686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.061 [2024-11-05 16:31:23.951853] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:11.061 [2024-11-05 16:31:23.951873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.061 [2024-11-05 16:31:23.951895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:11.061 [2024-11-05 16:31:23.951982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.061 pt1 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:11.061 16:31:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.061 16:31:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.061 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.061 "name": "raid_bdev1", 00:17:11.061 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:11.061 "strip_size_kb": 64, 00:17:11.061 "state": "configuring", 00:17:11.061 "raid_level": "raid5f", 00:17:11.061 
"superblock": true, 00:17:11.061 "num_base_bdevs": 3, 00:17:11.061 "num_base_bdevs_discovered": 1, 00:17:11.061 "num_base_bdevs_operational": 2, 00:17:11.061 "base_bdevs_list": [ 00:17:11.061 { 00:17:11.061 "name": null, 00:17:11.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.061 "is_configured": false, 00:17:11.061 "data_offset": 2048, 00:17:11.061 "data_size": 63488 00:17:11.061 }, 00:17:11.061 { 00:17:11.061 "name": "pt2", 00:17:11.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.061 "is_configured": true, 00:17:11.061 "data_offset": 2048, 00:17:11.061 "data_size": 63488 00:17:11.061 }, 00:17:11.061 { 00:17:11.061 "name": null, 00:17:11.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.061 "is_configured": false, 00:17:11.061 "data_offset": 2048, 00:17:11.061 "data_size": 63488 00:17:11.061 } 00:17:11.061 ] 00:17:11.061 }' 00:17:11.061 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.061 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.630 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.631 [2024-11-05 16:31:24.467437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:11.631 [2024-11-05 16:31:24.467593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.631 [2024-11-05 16:31:24.467639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:11.631 [2024-11-05 16:31:24.467672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.631 [2024-11-05 16:31:24.468289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.631 [2024-11-05 16:31:24.468370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:11.631 [2024-11-05 16:31:24.468508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:11.631 [2024-11-05 16:31:24.468586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:11.631 [2024-11-05 16:31:24.468796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:11.631 [2024-11-05 16:31:24.468842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:11.631 [2024-11-05 16:31:24.469175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:11.631 [2024-11-05 16:31:24.475667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:11.631 [2024-11-05 16:31:24.475754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:11.631 [2024-11-05 16:31:24.476107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.631 pt3 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.631 "name": "raid_bdev1", 00:17:11.631 "uuid": "9108aea9-c618-4592-89b3-4cc4a47c6681", 00:17:11.631 "strip_size_kb": 64, 00:17:11.631 "state": "online", 00:17:11.631 "raid_level": 
"raid5f", 00:17:11.631 "superblock": true, 00:17:11.631 "num_base_bdevs": 3, 00:17:11.631 "num_base_bdevs_discovered": 2, 00:17:11.631 "num_base_bdevs_operational": 2, 00:17:11.631 "base_bdevs_list": [ 00:17:11.631 { 00:17:11.631 "name": null, 00:17:11.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.631 "is_configured": false, 00:17:11.631 "data_offset": 2048, 00:17:11.631 "data_size": 63488 00:17:11.631 }, 00:17:11.631 { 00:17:11.631 "name": "pt2", 00:17:11.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.631 "is_configured": true, 00:17:11.631 "data_offset": 2048, 00:17:11.631 "data_size": 63488 00:17:11.631 }, 00:17:11.631 { 00:17:11.631 "name": "pt3", 00:17:11.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.631 "is_configured": true, 00:17:11.631 "data_offset": 2048, 00:17:11.631 "data_size": 63488 00:17:11.631 } 00:17:11.631 ] 00:17:11.631 }' 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.631 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.891 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:11.891 16:31:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:11.891 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.891 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.150 16:31:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.150 [2024-11-05 16:31:25.023659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9108aea9-c618-4592-89b3-4cc4a47c6681 '!=' 9108aea9-c618-4592-89b3-4cc4a47c6681 ']' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81473 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81473 ']' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81473 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81473 00:17:12.150 killing process with pid 81473 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81473' 00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81473 00:17:12.150 [2024-11-05 16:31:25.105811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.150 [2024-11-05 16:31:25.105933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:12.150 16:31:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81473
00:17:12.150 [2024-11-05 16:31:25.106009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:12.150 [2024-11-05 16:31:25.106024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:17:12.410 [2024-11-05 16:31:25.430069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:13.789 16:31:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:17:13.789
00:17:13.789 real 0m8.206s
00:17:13.789 user 0m12.768s
00:17:13.789 sys 0m1.506s
00:17:13.789 16:31:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:13.789 16:31:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.789 ************************************
00:17:13.789 END TEST raid5f_superblock_test
00:17:13.789 ************************************
00:17:13.789 16:31:26 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:17:13.789 16:31:26 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true
00:17:13.789 16:31:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']'
00:17:13.789 16:31:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:13.789 16:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:13.789 ************************************
00:17:13.789 START TEST raid5f_rebuild_test
00:17:13.789 ************************************
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:17:13.789 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81918
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81918
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81918 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:17:13.790 16:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.790 [2024-11-05 16:31:26.872892] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:17:13.790 I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:13.790 Zero copy mechanism will not be used.
00:17:13.790 [2024-11-05 16:31:26.873120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81918 ]
00:17:14.049 [2024-11-05 16:31:27.050578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:14.308 [2024-11-05 16:31:27.183942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:14.567 [2024-11-05 16:31:27.435031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:14.567 [2024-11-05 16:31:27.435106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.827 BaseBdev1_malloc
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.827 [2024-11-05 16:31:27.834760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:14.827 [2024-11-05 16:31:27.834838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:14.827 [2024-11-05 16:31:27.834865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:14.827 [2024-11-05 16:31:27.834878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:14.827 [2024-11-05 16:31:27.837270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:14.827 [2024-11-05 16:31:27.837317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:14.827 BaseBdev1
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:17:14.827 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.828 BaseBdev2_malloc
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.828 [2024-11-05 16:31:27.900201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:14.828 [2024-11-05 16:31:27.900338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:14.828 [2024-11-05 16:31:27.900388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:14.828 [2024-11-05 16:31:27.900404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:14.828 [2024-11-05 16:31:27.902871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:14.828 [2024-11-05 16:31:27.902918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:14.828 BaseBdev2
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:17:14.828 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.087 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.087 BaseBdev3_malloc
00:17:15.087 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.087 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:17:15.087 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.087 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.087 [2024-11-05 16:31:27.970080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:17:15.087 [2024-11-05 16:31:27.970197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:15.087 [2024-11-05 16:31:27.970229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:15.087 [2024-11-05 16:31:27.970259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:15.087 [2024-11-05 16:31:27.972712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:15.088 [2024-11-05 16:31:27.972759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:17:15.088 BaseBdev3
00:17:15.088 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:17:15.088 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.088 16:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.088 spare_malloc
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.088 spare_delay
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.088 [2024-11-05 16:31:28.042295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:15.088 [2024-11-05 16:31:28.042362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:15.088 [2024-11-05 16:31:28.042384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:17:15.088 [2024-11-05 16:31:28.042397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:15.088 [2024-11-05 16:31:28.044904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:15.088 [2024-11-05 16:31:28.044952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:15.088 spare
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.088 [2024-11-05 16:31:28.054366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:15.088 [2024-11-05 16:31:28.056474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:15.088 [2024-11-05 16:31:28.056640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:15.088 [2024-11-05 16:31:28.056766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:15.088 [2024-11-05 16:31:28.056782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:17:15.088 [2024-11-05 16:31:28.057148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:17:15.088 [2024-11-05 16:31:28.063885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:15.088 [2024-11-05 16:31:28.063962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:15.088 [2024-11-05 16:31:28.064249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:15.088 "name": "raid_bdev1",
00:17:15.088 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2",
00:17:15.088 "strip_size_kb": 64,
00:17:15.088 "state": "online",
00:17:15.088 "raid_level": "raid5f",
00:17:15.088 "superblock": false,
00:17:15.088 "num_base_bdevs": 3,
00:17:15.088 "num_base_bdevs_discovered": 3,
00:17:15.088 "num_base_bdevs_operational": 3,
00:17:15.088 "base_bdevs_list": [
00:17:15.088 {
00:17:15.088 "name": "BaseBdev1",
00:17:15.088 "uuid": "b0ee129b-104f-5110-9140-db2c64c1ff2d",
00:17:15.088 "is_configured": true,
00:17:15.088 "data_offset": 0,
00:17:15.088 "data_size": 65536
00:17:15.088 },
00:17:15.088 {
00:17:15.088 "name": "BaseBdev2",
00:17:15.088 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e",
00:17:15.088 "is_configured": true,
00:17:15.088 "data_offset": 0,
00:17:15.088 "data_size": 65536
00:17:15.088 },
00:17:15.088 {
00:17:15.088 "name": "BaseBdev3",
00:17:15.088 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786",
00:17:15.088 "is_configured": true,
00:17:15.088 "data_offset": 0,
00:17:15.088 "data_size": 65536
00:17:15.088 }
00:17:15.088 ]
00:17:15.088 }'
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:15.088 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.657 [2024-11-05 16:31:28.543341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:15.657 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:17:15.916 [2024-11-05 16:31:28.850652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:17:15.916 /dev/nbd0
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:15.916 1+0 records in
00:17:15.916 1+0 records out
00:17:15.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408043 s, 10.0 MB/s
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:17:15.916 16:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:17:16.490 512+0 records in
00:17:16.490 512+0 records out
00:17:16.490 67108864 bytes (67 MB, 64 MiB) copied, 0.433446 s, 155 MB/s
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:16.490 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:16.748 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:16.749 [2024-11-05 16:31:29.614605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:16.749 [2024-11-05 16:31:29.627553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:16.749 "name": "raid_bdev1",
00:17:16.749 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2",
00:17:16.749 "strip_size_kb": 64,
00:17:16.749 "state": "online",
00:17:16.749 "raid_level": "raid5f",
00:17:16.749 "superblock": false,
00:17:16.749 "num_base_bdevs": 3,
00:17:16.749 "num_base_bdevs_discovered": 2,
00:17:16.749 "num_base_bdevs_operational": 2,
00:17:16.749 "base_bdevs_list": [
00:17:16.749 {
00:17:16.749 "name": null,
00:17:16.749 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.749 "is_configured": false,
00:17:16.749 "data_offset": 0,
00:17:16.749 "data_size": 65536
00:17:16.749 },
00:17:16.749 {
00:17:16.749 "name": "BaseBdev2",
00:17:16.749 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e",
00:17:16.749 "is_configured": true,
00:17:16.749 "data_offset": 0,
00:17:16.749 "data_size": 65536
00:17:16.749 },
00:17:16.749 {
00:17:16.749 "name": "BaseBdev3",
00:17:16.749 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786",
00:17:16.749 "is_configured": true,
00:17:16.749 "data_offset": 0,
00:17:16.749 "data_size": 65536
00:17:16.749 }
00:17:16.749 ]
00:17:16.749 }'
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:16.749 16:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:17.008 16:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:17.008 16:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.008 16:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:17.008 [2024-11-05 16:31:30.038884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:17.008 [2024-11-05 16:31:30.059696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:17:17.008 16:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.008 16:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:17.008 [2024-11-05 16:31:30.069378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:18.387 "name": "raid_bdev1",
00:17:18.387 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2",
00:17:18.387 "strip_size_kb": 64,
00:17:18.387 "state": "online",
00:17:18.387 "raid_level": "raid5f",
00:17:18.387 "superblock": false,
00:17:18.387 "num_base_bdevs": 3,
00:17:18.387 "num_base_bdevs_discovered": 3,
00:17:18.387 "num_base_bdevs_operational": 3,
00:17:18.387 "process": {
00:17:18.387 "type": "rebuild",
00:17:18.387 "target": "spare",
00:17:18.387 "progress": {
00:17:18.387 "blocks": 18432,
00:17:18.387 "percent": 14
00:17:18.387 }
00:17:18.387 },
00:17:18.387 "base_bdevs_list": [
00:17:18.387 {
00:17:18.387 "name": "spare",
00:17:18.387 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc",
00:17:18.387 "is_configured": true,
00:17:18.387 "data_offset": 0,
00:17:18.387 "data_size": 65536
00:17:18.387 },
00:17:18.387 {
00:17:18.387 "name": "BaseBdev2",
00:17:18.387 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e",
00:17:18.387 "is_configured": true,
00:17:18.387 "data_offset": 0,
00:17:18.387 "data_size": 65536
00:17:18.387 },
00:17:18.387 {
00:17:18.387 "name": "BaseBdev3",
00:17:18.387 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786",
00:17:18.387 "is_configured": true,
00:17:18.387 "data_offset": 0,
00:17:18.387 "data_size": 65536
00:17:18.387 }
00:17:18.387 ]
00:17:18.387 }'
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:18.387 [2024-11-05 16:31:31.216961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:18.387 [2024-11-05 16:31:31.282131] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:18.387 [2024-11-05 16:31:31.282205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:18.387 [2024-11-05 16:31:31.282229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:18.387 [2024-11-05 16:31:31.282239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:18.387 "name": "raid_bdev1",
00:17:18.387 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2",
00:17:18.387 "strip_size_kb": 64,
00:17:18.387 "state": "online",
00:17:18.387 "raid_level": "raid5f",
00:17:18.387 "superblock": false,
00:17:18.387 "num_base_bdevs": 3,
00:17:18.387 "num_base_bdevs_discovered": 2,
00:17:18.387 "num_base_bdevs_operational": 2,
00:17:18.387 "base_bdevs_list": [
00:17:18.387 {
00:17:18.387 "name": null,
00:17:18.387 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.387 "is_configured": false,
00:17:18.387 "data_offset": 0,
00:17:18.387 "data_size": 65536
00:17:18.387 },
00:17:18.387 {
00:17:18.387 "name": "BaseBdev2",
"uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:18.387 "is_configured": true, 00:17:18.387 "data_offset": 0, 00:17:18.387 "data_size": 65536 00:17:18.387 }, 00:17:18.387 { 00:17:18.387 "name": "BaseBdev3", 00:17:18.387 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:18.387 "is_configured": true, 00:17:18.387 "data_offset": 0, 00:17:18.387 "data_size": 65536 00:17:18.387 } 00:17:18.387 ] 00:17:18.387 }' 00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.387 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.956 "name": "raid_bdev1", 00:17:18.956 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:18.956 "strip_size_kb": 64, 00:17:18.956 "state": "online", 00:17:18.956 "raid_level": 
"raid5f", 00:17:18.956 "superblock": false, 00:17:18.956 "num_base_bdevs": 3, 00:17:18.956 "num_base_bdevs_discovered": 2, 00:17:18.956 "num_base_bdevs_operational": 2, 00:17:18.956 "base_bdevs_list": [ 00:17:18.956 { 00:17:18.956 "name": null, 00:17:18.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.956 "is_configured": false, 00:17:18.956 "data_offset": 0, 00:17:18.956 "data_size": 65536 00:17:18.956 }, 00:17:18.956 { 00:17:18.956 "name": "BaseBdev2", 00:17:18.956 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:18.956 "is_configured": true, 00:17:18.956 "data_offset": 0, 00:17:18.956 "data_size": 65536 00:17:18.956 }, 00:17:18.956 { 00:17:18.956 "name": "BaseBdev3", 00:17:18.956 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:18.956 "is_configured": true, 00:17:18.956 "data_offset": 0, 00:17:18.956 "data_size": 65536 00:17:18.956 } 00:17:18.956 ] 00:17:18.956 }' 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.956 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.957 [2024-11-05 16:31:31.898318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.957 [2024-11-05 16:31:31.916120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.957 16:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.957 [2024-11-05 16:31:31.924555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.895 16:31:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.154 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.154 "name": "raid_bdev1", 00:17:20.154 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:20.154 "strip_size_kb": 64, 00:17:20.154 "state": "online", 00:17:20.154 "raid_level": "raid5f", 00:17:20.154 "superblock": false, 00:17:20.154 "num_base_bdevs": 3, 00:17:20.154 "num_base_bdevs_discovered": 3, 00:17:20.154 "num_base_bdevs_operational": 3, 00:17:20.154 "process": { 00:17:20.154 "type": "rebuild", 00:17:20.154 "target": "spare", 00:17:20.154 "progress": { 00:17:20.154 "blocks": 20480, 00:17:20.154 
"percent": 15 00:17:20.154 } 00:17:20.154 }, 00:17:20.154 "base_bdevs_list": [ 00:17:20.154 { 00:17:20.154 "name": "spare", 00:17:20.154 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:20.154 "is_configured": true, 00:17:20.154 "data_offset": 0, 00:17:20.154 "data_size": 65536 00:17:20.154 }, 00:17:20.154 { 00:17:20.154 "name": "BaseBdev2", 00:17:20.154 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:20.154 "is_configured": true, 00:17:20.154 "data_offset": 0, 00:17:20.154 "data_size": 65536 00:17:20.154 }, 00:17:20.154 { 00:17:20.154 "name": "BaseBdev3", 00:17:20.154 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:20.154 "is_configured": true, 00:17:20.154 "data_offset": 0, 00:17:20.155 "data_size": 65536 00:17:20.155 } 00:17:20.155 ] 00:17:20.155 }' 00:17:20.155 16:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.155 "name": "raid_bdev1", 00:17:20.155 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:20.155 "strip_size_kb": 64, 00:17:20.155 "state": "online", 00:17:20.155 "raid_level": "raid5f", 00:17:20.155 "superblock": false, 00:17:20.155 "num_base_bdevs": 3, 00:17:20.155 "num_base_bdevs_discovered": 3, 00:17:20.155 "num_base_bdevs_operational": 3, 00:17:20.155 "process": { 00:17:20.155 "type": "rebuild", 00:17:20.155 "target": "spare", 00:17:20.155 "progress": { 00:17:20.155 "blocks": 22528, 00:17:20.155 "percent": 17 00:17:20.155 } 00:17:20.155 }, 00:17:20.155 "base_bdevs_list": [ 00:17:20.155 { 00:17:20.155 "name": "spare", 00:17:20.155 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:20.155 "is_configured": true, 00:17:20.155 "data_offset": 0, 00:17:20.155 "data_size": 65536 00:17:20.155 }, 00:17:20.155 { 00:17:20.155 "name": "BaseBdev2", 00:17:20.155 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:20.155 "is_configured": true, 00:17:20.155 "data_offset": 0, 00:17:20.155 
"data_size": 65536 00:17:20.155 }, 00:17:20.155 { 00:17:20.155 "name": "BaseBdev3", 00:17:20.155 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:20.155 "is_configured": true, 00:17:20.155 "data_offset": 0, 00:17:20.155 "data_size": 65536 00:17:20.155 } 00:17:20.155 ] 00:17:20.155 }' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.155 16:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.553 "name": "raid_bdev1", 00:17:21.553 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:21.553 "strip_size_kb": 64, 00:17:21.553 "state": "online", 00:17:21.553 "raid_level": "raid5f", 00:17:21.553 "superblock": false, 00:17:21.553 "num_base_bdevs": 3, 00:17:21.553 "num_base_bdevs_discovered": 3, 00:17:21.553 "num_base_bdevs_operational": 3, 00:17:21.553 "process": { 00:17:21.553 "type": "rebuild", 00:17:21.553 "target": "spare", 00:17:21.553 "progress": { 00:17:21.553 "blocks": 45056, 00:17:21.553 "percent": 34 00:17:21.553 } 00:17:21.553 }, 00:17:21.553 "base_bdevs_list": [ 00:17:21.553 { 00:17:21.553 "name": "spare", 00:17:21.553 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:21.553 "is_configured": true, 00:17:21.553 "data_offset": 0, 00:17:21.553 "data_size": 65536 00:17:21.553 }, 00:17:21.553 { 00:17:21.553 "name": "BaseBdev2", 00:17:21.553 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:21.553 "is_configured": true, 00:17:21.553 "data_offset": 0, 00:17:21.553 "data_size": 65536 00:17:21.553 }, 00:17:21.553 { 00:17:21.553 "name": "BaseBdev3", 00:17:21.553 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:21.553 "is_configured": true, 00:17:21.553 "data_offset": 0, 00:17:21.553 "data_size": 65536 00:17:21.553 } 00:17:21.553 ] 00:17:21.553 }' 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.553 16:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.491 "name": "raid_bdev1", 00:17:22.491 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:22.491 "strip_size_kb": 64, 00:17:22.491 "state": "online", 00:17:22.491 "raid_level": "raid5f", 00:17:22.491 "superblock": false, 00:17:22.491 "num_base_bdevs": 3, 00:17:22.491 "num_base_bdevs_discovered": 3, 00:17:22.491 "num_base_bdevs_operational": 3, 00:17:22.491 "process": { 00:17:22.491 "type": "rebuild", 00:17:22.491 "target": "spare", 00:17:22.491 "progress": { 00:17:22.491 "blocks": 69632, 00:17:22.491 "percent": 53 00:17:22.491 } 00:17:22.491 }, 00:17:22.491 "base_bdevs_list": [ 00:17:22.491 { 00:17:22.491 "name": "spare", 00:17:22.491 "uuid": 
"a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:22.491 "is_configured": true, 00:17:22.491 "data_offset": 0, 00:17:22.491 "data_size": 65536 00:17:22.491 }, 00:17:22.491 { 00:17:22.491 "name": "BaseBdev2", 00:17:22.491 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:22.491 "is_configured": true, 00:17:22.491 "data_offset": 0, 00:17:22.491 "data_size": 65536 00:17:22.491 }, 00:17:22.491 { 00:17:22.491 "name": "BaseBdev3", 00:17:22.491 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:22.491 "is_configured": true, 00:17:22.491 "data_offset": 0, 00:17:22.491 "data_size": 65536 00:17:22.491 } 00:17:22.491 ] 00:17:22.491 }' 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.491 16:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.869 16:31:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.869 "name": "raid_bdev1", 00:17:23.869 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:23.869 "strip_size_kb": 64, 00:17:23.869 "state": "online", 00:17:23.869 "raid_level": "raid5f", 00:17:23.869 "superblock": false, 00:17:23.869 "num_base_bdevs": 3, 00:17:23.869 "num_base_bdevs_discovered": 3, 00:17:23.869 "num_base_bdevs_operational": 3, 00:17:23.869 "process": { 00:17:23.869 "type": "rebuild", 00:17:23.869 "target": "spare", 00:17:23.869 "progress": { 00:17:23.869 "blocks": 92160, 00:17:23.869 "percent": 70 00:17:23.869 } 00:17:23.869 }, 00:17:23.869 "base_bdevs_list": [ 00:17:23.869 { 00:17:23.869 "name": "spare", 00:17:23.869 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:23.869 "is_configured": true, 00:17:23.869 "data_offset": 0, 00:17:23.869 "data_size": 65536 00:17:23.869 }, 00:17:23.869 { 00:17:23.869 "name": "BaseBdev2", 00:17:23.869 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:23.869 "is_configured": true, 00:17:23.869 "data_offset": 0, 00:17:23.869 "data_size": 65536 00:17:23.869 }, 00:17:23.869 { 00:17:23.869 "name": "BaseBdev3", 00:17:23.869 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:23.869 "is_configured": true, 00:17:23.869 "data_offset": 0, 00:17:23.869 "data_size": 65536 00:17:23.869 } 00:17:23.869 ] 00:17:23.869 }' 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.869 16:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.807 "name": "raid_bdev1", 00:17:24.807 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:24.807 "strip_size_kb": 64, 00:17:24.807 "state": "online", 00:17:24.807 "raid_level": "raid5f", 00:17:24.807 "superblock": false, 00:17:24.807 "num_base_bdevs": 3, 00:17:24.807 "num_base_bdevs_discovered": 3, 00:17:24.807 
"num_base_bdevs_operational": 3, 00:17:24.807 "process": { 00:17:24.807 "type": "rebuild", 00:17:24.807 "target": "spare", 00:17:24.807 "progress": { 00:17:24.807 "blocks": 114688, 00:17:24.807 "percent": 87 00:17:24.807 } 00:17:24.807 }, 00:17:24.807 "base_bdevs_list": [ 00:17:24.807 { 00:17:24.807 "name": "spare", 00:17:24.807 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:24.807 "is_configured": true, 00:17:24.807 "data_offset": 0, 00:17:24.807 "data_size": 65536 00:17:24.807 }, 00:17:24.807 { 00:17:24.807 "name": "BaseBdev2", 00:17:24.807 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:24.807 "is_configured": true, 00:17:24.807 "data_offset": 0, 00:17:24.807 "data_size": 65536 00:17:24.807 }, 00:17:24.807 { 00:17:24.807 "name": "BaseBdev3", 00:17:24.807 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:24.807 "is_configured": true, 00:17:24.807 "data_offset": 0, 00:17:24.807 "data_size": 65536 00:17:24.807 } 00:17:24.807 ] 00:17:24.807 }' 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.807 16:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.376 [2024-11-05 16:31:38.399713] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:25.376 [2024-11-05 16:31:38.399964] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:25.376 [2024-11-05 16:31:38.400052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.946 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.947 "name": "raid_bdev1", 00:17:25.947 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:25.947 "strip_size_kb": 64, 00:17:25.947 "state": "online", 00:17:25.947 "raid_level": "raid5f", 00:17:25.947 "superblock": false, 00:17:25.947 "num_base_bdevs": 3, 00:17:25.947 "num_base_bdevs_discovered": 3, 00:17:25.947 "num_base_bdevs_operational": 3, 00:17:25.947 "base_bdevs_list": [ 00:17:25.947 { 00:17:25.947 "name": "spare", 00:17:25.947 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:25.947 "is_configured": true, 00:17:25.947 "data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 }, 00:17:25.947 { 00:17:25.947 "name": "BaseBdev2", 00:17:25.947 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:25.947 "is_configured": true, 00:17:25.947 
"data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 }, 00:17:25.947 { 00:17:25.947 "name": "BaseBdev3", 00:17:25.947 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:25.947 "is_configured": true, 00:17:25.947 "data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 } 00:17:25.947 ] 00:17:25.947 }' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.947 16:31:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.947 "name": "raid_bdev1", 00:17:25.947 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:25.947 "strip_size_kb": 64, 00:17:25.947 "state": "online", 00:17:25.947 "raid_level": "raid5f", 00:17:25.947 "superblock": false, 00:17:25.947 "num_base_bdevs": 3, 00:17:25.947 "num_base_bdevs_discovered": 3, 00:17:25.947 "num_base_bdevs_operational": 3, 00:17:25.947 "base_bdevs_list": [ 00:17:25.947 { 00:17:25.947 "name": "spare", 00:17:25.947 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:25.947 "is_configured": true, 00:17:25.947 "data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 }, 00:17:25.947 { 00:17:25.947 "name": "BaseBdev2", 00:17:25.947 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:25.947 "is_configured": true, 00:17:25.947 "data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 }, 00:17:25.947 { 00:17:25.947 "name": "BaseBdev3", 00:17:25.947 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:25.947 "is_configured": true, 00:17:25.947 "data_offset": 0, 00:17:25.947 "data_size": 65536 00:17:25.947 } 00:17:25.947 ] 00:17:25.947 }' 00:17:25.947 16:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.947 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.206 16:31:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.206 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.207 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.207 "name": "raid_bdev1", 00:17:26.207 "uuid": "c78e205f-51e9-4a2b-9c5e-05ff5f6bbbe2", 00:17:26.207 "strip_size_kb": 64, 00:17:26.207 "state": "online", 00:17:26.207 "raid_level": "raid5f", 00:17:26.207 "superblock": false, 00:17:26.207 "num_base_bdevs": 3, 00:17:26.207 "num_base_bdevs_discovered": 3, 00:17:26.207 "num_base_bdevs_operational": 3, 00:17:26.207 "base_bdevs_list": [ 00:17:26.207 { 00:17:26.207 "name": "spare", 00:17:26.207 "uuid": "a5140125-5c79-57ad-8bb6-fabe303487cc", 00:17:26.207 "is_configured": true, 00:17:26.207 "data_offset": 0, 00:17:26.207 "data_size": 65536 00:17:26.207 }, 00:17:26.207 { 00:17:26.207 
"name": "BaseBdev2", 00:17:26.207 "uuid": "70912df4-5792-54b4-a940-f6e78588a27e", 00:17:26.207 "is_configured": true, 00:17:26.207 "data_offset": 0, 00:17:26.207 "data_size": 65536 00:17:26.207 }, 00:17:26.207 { 00:17:26.207 "name": "BaseBdev3", 00:17:26.207 "uuid": "2b76b5db-44e9-5f83-b016-a7a8bf5e6786", 00:17:26.207 "is_configured": true, 00:17:26.207 "data_offset": 0, 00:17:26.207 "data_size": 65536 00:17:26.207 } 00:17:26.207 ] 00:17:26.207 }' 00:17:26.207 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.207 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.861 [2024-11-05 16:31:39.567036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.861 [2024-11-05 16:31:39.567186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.861 [2024-11-05 16:31:39.567366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.861 [2024-11-05 16:31:39.567541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.861 [2024-11-05 16:31:39.567619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.861 16:31:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:26.861 /dev/nbd0 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.861 16:31:39 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.861 1+0 records in 00:17:26.861 1+0 records out 00:17:26.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296583 s, 13.8 MB/s 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:26.861 16:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:27.118 /dev/nbd1 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.118 1+0 records in 00:17:27.118 1+0 records out 00:17:27.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288434 s, 14.2 MB/s 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:27.118 16:31:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:27.118 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.376 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.634 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81918 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81918 ']' 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81918 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81918 00:17:27.893 killing process with pid 81918 00:17:27.893 Received shutdown signal, test time was about 60.000000 seconds 00:17:27.893 00:17:27.893 Latency(us) 00:17:27.893 
[2024-11-05T16:31:40.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.893 [2024-11-05T16:31:40.981Z] =================================================================================================================== 00:17:27.893 [2024-11-05T16:31:40.981Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81918' 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81918 00:17:27.893 16:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81918 00:17:27.893 [2024-11-05 16:31:40.936263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.461 [2024-11-05 16:31:41.383310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:29.840 00:17:29.840 real 0m15.817s 00:17:29.840 user 0m19.472s 00:17:29.840 sys 0m2.119s 00:17:29.840 ************************************ 00:17:29.840 END TEST raid5f_rebuild_test 00:17:29.840 ************************************ 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.840 16:31:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:29.840 16:31:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:29.840 16:31:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:29.840 16:31:42 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.840 ************************************ 00:17:29.840 START TEST raid5f_rebuild_test_sb 00:17:29.840 ************************************ 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:29.840 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82369 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82369 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82369 ']' 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:29.841 16:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.841 [2024-11-05 16:31:42.753703] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:17:29.841 [2024-11-05 16:31:42.753936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82369 ] 00:17:29.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.841 Zero copy mechanism will not be used. 
00:17:29.841 [2024-11-05 16:31:42.928673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.099 [2024-11-05 16:31:43.062691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.358 [2024-11-05 16:31:43.290613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.358 [2024-11-05 16:31:43.290772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.618 BaseBdev1_malloc 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.618 [2024-11-05 16:31:43.687028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.618 [2024-11-05 16:31:43.687108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.618 [2024-11-05 16:31:43.687134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.618 
[2024-11-05 16:31:43.687148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.618 [2024-11-05 16:31:43.689666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.618 [2024-11-05 16:31:43.689704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.618 BaseBdev1 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.618 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 BaseBdev2_malloc 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 [2024-11-05 16:31:43.746419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:30.877 [2024-11-05 16:31:43.746483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.877 [2024-11-05 16:31:43.746503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.877 [2024-11-05 16:31:43.746515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.877 [2024-11-05 16:31:43.748820] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.877 [2024-11-05 16:31:43.748910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:30.877 BaseBdev2 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 BaseBdev3_malloc 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 [2024-11-05 16:31:43.816654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:30.877 [2024-11-05 16:31:43.816717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.877 [2024-11-05 16:31:43.816741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:30.877 [2024-11-05 16:31:43.816753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.877 [2024-11-05 16:31:43.819057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.877 [2024-11-05 16:31:43.819110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:17:30.877 BaseBdev3 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 spare_malloc 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 spare_delay 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 [2024-11-05 16:31:43.887225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.877 [2024-11-05 16:31:43.887281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.877 [2024-11-05 16:31:43.887300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:30.877 [2024-11-05 16:31:43.887311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.877 [2024-11-05 16:31:43.889742] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.877 [2024-11-05 16:31:43.889837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.877 spare 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.877 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.877 [2024-11-05 16:31:43.899320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.877 [2024-11-05 16:31:43.901387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.877 [2024-11-05 16:31:43.901529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.877 [2024-11-05 16:31:43.901741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.877 [2024-11-05 16:31:43.901759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:30.877 [2024-11-05 16:31:43.902051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:30.877 [2024-11-05 16:31:43.908824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.878 [2024-11-05 16:31:43.908891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.878 [2024-11-05 16:31:43.909173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.878 16:31:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.878 "name": "raid_bdev1", 00:17:30.878 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:30.878 "strip_size_kb": 64, 00:17:30.878 "state": "online", 00:17:30.878 "raid_level": "raid5f", 00:17:30.878 "superblock": true, 
00:17:30.878 "num_base_bdevs": 3, 00:17:30.878 "num_base_bdevs_discovered": 3, 00:17:30.878 "num_base_bdevs_operational": 3, 00:17:30.878 "base_bdevs_list": [ 00:17:30.878 { 00:17:30.878 "name": "BaseBdev1", 00:17:30.878 "uuid": "92db8989-cba0-55ec-8aa5-b05a7902b5d2", 00:17:30.878 "is_configured": true, 00:17:30.878 "data_offset": 2048, 00:17:30.878 "data_size": 63488 00:17:30.878 }, 00:17:30.878 { 00:17:30.878 "name": "BaseBdev2", 00:17:30.878 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:30.878 "is_configured": true, 00:17:30.878 "data_offset": 2048, 00:17:30.878 "data_size": 63488 00:17:30.878 }, 00:17:30.878 { 00:17:30.878 "name": "BaseBdev3", 00:17:30.878 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:30.878 "is_configured": true, 00:17:30.878 "data_offset": 2048, 00:17:30.878 "data_size": 63488 00:17:30.878 } 00:17:30.878 ] 00:17:30.878 }' 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.878 16:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.446 [2024-11-05 16:31:44.384357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.446 16:31:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.446 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:17:31.705 [2024-11-05 16:31:44.679691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:31.705 /dev/nbd0 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.705 1+0 records in 00:17:31.705 1+0 records out 00:17:31.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549961 s, 7.4 MB/s 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:31.705 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:31.706 16:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:32.293 496+0 records in 00:17:32.293 496+0 records out 00:17:32.293 65011712 bytes (65 MB, 62 MiB) copied, 0.411129 s, 158 MB/s 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.293 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.579 [2024-11-05 16:31:45.412656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.579 [2024-11-05 16:31:45.446156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.579 16:31:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.579 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.579 "name": "raid_bdev1", 00:17:32.579 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:32.579 "strip_size_kb": 64, 00:17:32.579 "state": "online", 00:17:32.579 "raid_level": "raid5f", 00:17:32.579 "superblock": true, 00:17:32.579 "num_base_bdevs": 3, 00:17:32.579 "num_base_bdevs_discovered": 2, 00:17:32.579 "num_base_bdevs_operational": 2, 00:17:32.579 "base_bdevs_list": [ 00:17:32.579 { 00:17:32.579 "name": null, 00:17:32.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.580 "is_configured": false, 00:17:32.580 "data_offset": 0, 00:17:32.580 "data_size": 63488 00:17:32.580 }, 00:17:32.580 { 00:17:32.580 "name": "BaseBdev2", 00:17:32.580 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:32.580 "is_configured": true, 00:17:32.580 "data_offset": 2048, 00:17:32.580 "data_size": 63488 00:17:32.580 }, 00:17:32.580 { 00:17:32.580 "name": "BaseBdev3", 00:17:32.580 "uuid": 
"190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:32.580 "is_configured": true, 00:17:32.580 "data_offset": 2048, 00:17:32.580 "data_size": 63488 00:17:32.580 } 00:17:32.580 ] 00:17:32.580 }' 00:17:32.580 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.580 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.838 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.838 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.838 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.838 [2024-11-05 16:31:45.873471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.838 [2024-11-05 16:31:45.893827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:32.838 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.838 16:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:32.838 [2024-11-05 16:31:45.903601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.217 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.217 "name": "raid_bdev1", 00:17:34.217 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:34.217 "strip_size_kb": 64, 00:17:34.217 "state": "online", 00:17:34.217 "raid_level": "raid5f", 00:17:34.217 "superblock": true, 00:17:34.217 "num_base_bdevs": 3, 00:17:34.217 "num_base_bdevs_discovered": 3, 00:17:34.217 "num_base_bdevs_operational": 3, 00:17:34.217 "process": { 00:17:34.218 "type": "rebuild", 00:17:34.218 "target": "spare", 00:17:34.218 "progress": { 00:17:34.218 "blocks": 18432, 00:17:34.218 "percent": 14 00:17:34.218 } 00:17:34.218 }, 00:17:34.218 "base_bdevs_list": [ 00:17:34.218 { 00:17:34.218 "name": "spare", 00:17:34.218 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:34.218 "is_configured": true, 00:17:34.218 "data_offset": 2048, 00:17:34.218 "data_size": 63488 00:17:34.218 }, 00:17:34.218 { 00:17:34.218 "name": "BaseBdev2", 00:17:34.218 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:34.218 "is_configured": true, 00:17:34.218 "data_offset": 2048, 00:17:34.218 "data_size": 63488 00:17:34.218 }, 00:17:34.218 { 00:17:34.218 "name": "BaseBdev3", 00:17:34.218 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:34.218 "is_configured": true, 00:17:34.218 "data_offset": 2048, 00:17:34.218 "data_size": 63488 00:17:34.218 } 00:17:34.218 ] 00:17:34.218 }' 00:17:34.218 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.218 16:31:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.218 16:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.218 [2024-11-05 16:31:47.027979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.218 [2024-11-05 16:31:47.116167] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.218 [2024-11-05 16:31:47.116267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.218 [2024-11-05 16:31:47.116295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.218 [2024-11-05 16:31:47.116305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.218 16:31:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.218 "name": "raid_bdev1", 00:17:34.218 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:34.218 "strip_size_kb": 64, 00:17:34.218 "state": "online", 00:17:34.218 "raid_level": "raid5f", 00:17:34.218 "superblock": true, 00:17:34.218 "num_base_bdevs": 3, 00:17:34.218 "num_base_bdevs_discovered": 2, 00:17:34.218 "num_base_bdevs_operational": 2, 00:17:34.218 "base_bdevs_list": [ 00:17:34.218 { 00:17:34.218 "name": null, 00:17:34.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.218 "is_configured": false, 00:17:34.218 "data_offset": 0, 00:17:34.218 "data_size": 63488 00:17:34.218 }, 00:17:34.218 { 00:17:34.218 "name": "BaseBdev2", 00:17:34.218 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:34.218 "is_configured": true, 00:17:34.218 "data_offset": 2048, 00:17:34.218 "data_size": 
63488 00:17:34.218 }, 00:17:34.218 { 00:17:34.218 "name": "BaseBdev3", 00:17:34.218 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:34.218 "is_configured": true, 00:17:34.218 "data_offset": 2048, 00:17:34.218 "data_size": 63488 00:17:34.218 } 00:17:34.218 ] 00:17:34.218 }' 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.218 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.788 "name": "raid_bdev1", 00:17:34.788 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:34.788 "strip_size_kb": 64, 00:17:34.788 "state": "online", 00:17:34.788 "raid_level": "raid5f", 00:17:34.788 "superblock": true, 00:17:34.788 "num_base_bdevs": 3, 00:17:34.788 
"num_base_bdevs_discovered": 2, 00:17:34.788 "num_base_bdevs_operational": 2, 00:17:34.788 "base_bdevs_list": [ 00:17:34.788 { 00:17:34.788 "name": null, 00:17:34.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.788 "is_configured": false, 00:17:34.788 "data_offset": 0, 00:17:34.788 "data_size": 63488 00:17:34.788 }, 00:17:34.788 { 00:17:34.788 "name": "BaseBdev2", 00:17:34.788 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:34.788 "is_configured": true, 00:17:34.788 "data_offset": 2048, 00:17:34.788 "data_size": 63488 00:17:34.788 }, 00:17:34.788 { 00:17:34.788 "name": "BaseBdev3", 00:17:34.788 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:34.788 "is_configured": true, 00:17:34.788 "data_offset": 2048, 00:17:34.788 "data_size": 63488 00:17:34.788 } 00:17:34.788 ] 00:17:34.788 }' 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.788 [2024-11-05 16:31:47.777155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.788 [2024-11-05 16:31:47.796681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:34.788 16:31:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.788 16:31:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:34.788 [2024-11-05 16:31:47.806506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.726 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.985 "name": "raid_bdev1", 00:17:35.985 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:35.985 "strip_size_kb": 64, 00:17:35.985 "state": "online", 00:17:35.985 "raid_level": "raid5f", 00:17:35.985 "superblock": true, 00:17:35.985 "num_base_bdevs": 3, 00:17:35.985 "num_base_bdevs_discovered": 3, 00:17:35.985 "num_base_bdevs_operational": 3, 00:17:35.985 "process": { 00:17:35.985 "type": "rebuild", 00:17:35.985 "target": "spare", 00:17:35.985 "progress": { 00:17:35.985 "blocks": 18432, 00:17:35.985 "percent": 14 00:17:35.985 } 
00:17:35.985 }, 00:17:35.985 "base_bdevs_list": [ 00:17:35.985 { 00:17:35.985 "name": "spare", 00:17:35.985 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 "data_size": 63488 00:17:35.985 }, 00:17:35.985 { 00:17:35.985 "name": "BaseBdev2", 00:17:35.985 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 "data_size": 63488 00:17:35.985 }, 00:17:35.985 { 00:17:35.985 "name": "BaseBdev3", 00:17:35.985 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 "data_size": 63488 00:17:35.985 } 00:17:35.985 ] 00:17:35.985 }' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:35.985 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=580 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.985 16:31:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.985 16:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.985 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.985 "name": "raid_bdev1", 00:17:35.985 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:35.985 "strip_size_kb": 64, 00:17:35.985 "state": "online", 00:17:35.985 "raid_level": "raid5f", 00:17:35.985 "superblock": true, 00:17:35.985 "num_base_bdevs": 3, 00:17:35.985 "num_base_bdevs_discovered": 3, 00:17:35.985 "num_base_bdevs_operational": 3, 00:17:35.985 "process": { 00:17:35.985 "type": "rebuild", 00:17:35.985 "target": "spare", 00:17:35.985 "progress": { 00:17:35.985 "blocks": 22528, 00:17:35.985 "percent": 17 00:17:35.985 } 00:17:35.985 }, 00:17:35.985 "base_bdevs_list": [ 00:17:35.985 { 00:17:35.985 "name": "spare", 00:17:35.985 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 
"data_size": 63488 00:17:35.985 }, 00:17:35.985 { 00:17:35.985 "name": "BaseBdev2", 00:17:35.985 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 "data_size": 63488 00:17:35.985 }, 00:17:35.985 { 00:17:35.985 "name": "BaseBdev3", 00:17:35.985 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:35.985 "is_configured": true, 00:17:35.985 "data_offset": 2048, 00:17:35.985 "data_size": 63488 00:17:35.985 } 00:17:35.985 ] 00:17:35.985 }' 00:17:35.985 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.985 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.985 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.244 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.244 16:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.180 "name": "raid_bdev1", 00:17:37.180 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:37.180 "strip_size_kb": 64, 00:17:37.180 "state": "online", 00:17:37.180 "raid_level": "raid5f", 00:17:37.180 "superblock": true, 00:17:37.180 "num_base_bdevs": 3, 00:17:37.180 "num_base_bdevs_discovered": 3, 00:17:37.180 "num_base_bdevs_operational": 3, 00:17:37.180 "process": { 00:17:37.180 "type": "rebuild", 00:17:37.180 "target": "spare", 00:17:37.180 "progress": { 00:17:37.180 "blocks": 45056, 00:17:37.180 "percent": 35 00:17:37.180 } 00:17:37.180 }, 00:17:37.180 "base_bdevs_list": [ 00:17:37.180 { 00:17:37.180 "name": "spare", 00:17:37.180 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:37.180 "is_configured": true, 00:17:37.180 "data_offset": 2048, 00:17:37.180 "data_size": 63488 00:17:37.180 }, 00:17:37.180 { 00:17:37.180 "name": "BaseBdev2", 00:17:37.180 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:37.180 "is_configured": true, 00:17:37.180 "data_offset": 2048, 00:17:37.180 "data_size": 63488 00:17:37.180 }, 00:17:37.180 { 00:17:37.180 "name": "BaseBdev3", 00:17:37.180 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:37.180 "is_configured": true, 00:17:37.180 "data_offset": 2048, 00:17:37.180 "data_size": 63488 00:17:37.180 } 00:17:37.180 ] 00:17:37.180 }' 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.180 16:31:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.180 16:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.556 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.556 "name": "raid_bdev1", 00:17:38.556 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:38.556 "strip_size_kb": 64, 00:17:38.556 "state": "online", 00:17:38.556 "raid_level": "raid5f", 00:17:38.556 "superblock": true, 00:17:38.557 "num_base_bdevs": 3, 00:17:38.557 "num_base_bdevs_discovered": 3, 00:17:38.557 "num_base_bdevs_operational": 
3, 00:17:38.557 "process": { 00:17:38.557 "type": "rebuild", 00:17:38.557 "target": "spare", 00:17:38.557 "progress": { 00:17:38.557 "blocks": 69632, 00:17:38.557 "percent": 54 00:17:38.557 } 00:17:38.557 }, 00:17:38.557 "base_bdevs_list": [ 00:17:38.557 { 00:17:38.557 "name": "spare", 00:17:38.557 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:38.557 "is_configured": true, 00:17:38.557 "data_offset": 2048, 00:17:38.557 "data_size": 63488 00:17:38.557 }, 00:17:38.557 { 00:17:38.557 "name": "BaseBdev2", 00:17:38.557 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:38.557 "is_configured": true, 00:17:38.557 "data_offset": 2048, 00:17:38.557 "data_size": 63488 00:17:38.557 }, 00:17:38.557 { 00:17:38.557 "name": "BaseBdev3", 00:17:38.557 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:38.557 "is_configured": true, 00:17:38.557 "data_offset": 2048, 00:17:38.557 "data_size": 63488 00:17:38.557 } 00:17:38.557 ] 00:17:38.557 }' 00:17:38.557 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.557 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.557 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.557 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.557 16:31:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.492 
16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.492 "name": "raid_bdev1", 00:17:39.492 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:39.492 "strip_size_kb": 64, 00:17:39.492 "state": "online", 00:17:39.492 "raid_level": "raid5f", 00:17:39.492 "superblock": true, 00:17:39.492 "num_base_bdevs": 3, 00:17:39.492 "num_base_bdevs_discovered": 3, 00:17:39.492 "num_base_bdevs_operational": 3, 00:17:39.492 "process": { 00:17:39.492 "type": "rebuild", 00:17:39.492 "target": "spare", 00:17:39.492 "progress": { 00:17:39.492 "blocks": 92160, 00:17:39.492 "percent": 72 00:17:39.492 } 00:17:39.492 }, 00:17:39.492 "base_bdevs_list": [ 00:17:39.492 { 00:17:39.492 "name": "spare", 00:17:39.492 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:39.492 "is_configured": true, 00:17:39.492 "data_offset": 2048, 00:17:39.492 "data_size": 63488 00:17:39.492 }, 00:17:39.492 { 00:17:39.492 "name": "BaseBdev2", 00:17:39.492 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:39.492 "is_configured": true, 00:17:39.492 "data_offset": 2048, 00:17:39.492 "data_size": 63488 00:17:39.492 }, 00:17:39.492 { 00:17:39.492 "name": "BaseBdev3", 00:17:39.492 "uuid": 
"190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:39.492 "is_configured": true, 00:17:39.492 "data_offset": 2048, 00:17:39.492 "data_size": 63488 00:17:39.492 } 00:17:39.492 ] 00:17:39.492 }' 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.492 16:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.866 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.866 
16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.866 "name": "raid_bdev1", 00:17:40.866 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:40.866 "strip_size_kb": 64, 00:17:40.866 "state": "online", 00:17:40.866 "raid_level": "raid5f", 00:17:40.866 "superblock": true, 00:17:40.866 "num_base_bdevs": 3, 00:17:40.866 "num_base_bdevs_discovered": 3, 00:17:40.866 "num_base_bdevs_operational": 3, 00:17:40.866 "process": { 00:17:40.866 "type": "rebuild", 00:17:40.866 "target": "spare", 00:17:40.866 "progress": { 00:17:40.866 "blocks": 114688, 00:17:40.866 "percent": 90 00:17:40.866 } 00:17:40.866 }, 00:17:40.866 "base_bdevs_list": [ 00:17:40.867 { 00:17:40.867 "name": "spare", 00:17:40.867 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:40.867 "is_configured": true, 00:17:40.867 "data_offset": 2048, 00:17:40.867 "data_size": 63488 00:17:40.867 }, 00:17:40.867 { 00:17:40.867 "name": "BaseBdev2", 00:17:40.867 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:40.867 "is_configured": true, 00:17:40.867 "data_offset": 2048, 00:17:40.867 "data_size": 63488 00:17:40.867 }, 00:17:40.867 { 00:17:40.867 "name": "BaseBdev3", 00:17:40.867 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:40.867 "is_configured": true, 00:17:40.867 "data_offset": 2048, 00:17:40.867 "data_size": 63488 00:17:40.867 } 00:17:40.867 ] 00:17:40.867 }' 00:17:40.867 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.867 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.867 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.867 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.867 16:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.125 [2024-11-05 16:31:54.070872] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:41.125 [2024-11-05 16:31:54.071102] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:41.125 [2024-11-05 16:31:54.071266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.691 "name": "raid_bdev1", 00:17:41.691 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:41.691 "strip_size_kb": 64, 00:17:41.691 "state": "online", 00:17:41.691 "raid_level": "raid5f", 00:17:41.691 "superblock": true, 00:17:41.691 "num_base_bdevs": 3, 00:17:41.691 "num_base_bdevs_discovered": 3, 
00:17:41.691 "num_base_bdevs_operational": 3, 00:17:41.691 "base_bdevs_list": [ 00:17:41.691 { 00:17:41.691 "name": "spare", 00:17:41.691 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:41.691 "is_configured": true, 00:17:41.691 "data_offset": 2048, 00:17:41.691 "data_size": 63488 00:17:41.691 }, 00:17:41.691 { 00:17:41.691 "name": "BaseBdev2", 00:17:41.691 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:41.691 "is_configured": true, 00:17:41.691 "data_offset": 2048, 00:17:41.691 "data_size": 63488 00:17:41.691 }, 00:17:41.691 { 00:17:41.691 "name": "BaseBdev3", 00:17:41.691 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:41.691 "is_configured": true, 00:17:41.691 "data_offset": 2048, 00:17:41.691 "data_size": 63488 00:17:41.691 } 00:17:41.691 ] 00:17:41.691 }' 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.691 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.950 "name": "raid_bdev1", 00:17:41.950 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:41.950 "strip_size_kb": 64, 00:17:41.950 "state": "online", 00:17:41.950 "raid_level": "raid5f", 00:17:41.950 "superblock": true, 00:17:41.950 "num_base_bdevs": 3, 00:17:41.950 "num_base_bdevs_discovered": 3, 00:17:41.950 "num_base_bdevs_operational": 3, 00:17:41.950 "base_bdevs_list": [ 00:17:41.950 { 00:17:41.950 "name": "spare", 00:17:41.950 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:41.950 "is_configured": true, 00:17:41.950 "data_offset": 2048, 00:17:41.950 "data_size": 63488 00:17:41.950 }, 00:17:41.950 { 00:17:41.950 "name": "BaseBdev2", 00:17:41.950 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:41.950 "is_configured": true, 00:17:41.950 "data_offset": 2048, 00:17:41.950 "data_size": 63488 00:17:41.950 }, 00:17:41.950 { 00:17:41.950 "name": "BaseBdev3", 00:17:41.950 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:41.950 "is_configured": true, 00:17:41.950 "data_offset": 2048, 00:17:41.950 "data_size": 63488 00:17:41.950 } 00:17:41.950 ] 00:17:41.950 }' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.950 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.950 "name": "raid_bdev1", 00:17:41.950 "uuid": 
"c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:41.950 "strip_size_kb": 64, 00:17:41.950 "state": "online", 00:17:41.951 "raid_level": "raid5f", 00:17:41.951 "superblock": true, 00:17:41.951 "num_base_bdevs": 3, 00:17:41.951 "num_base_bdevs_discovered": 3, 00:17:41.951 "num_base_bdevs_operational": 3, 00:17:41.951 "base_bdevs_list": [ 00:17:41.951 { 00:17:41.951 "name": "spare", 00:17:41.951 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 }, 00:17:41.951 { 00:17:41.951 "name": "BaseBdev2", 00:17:41.951 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 }, 00:17:41.951 { 00:17:41.951 "name": "BaseBdev3", 00:17:41.951 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:41.951 "is_configured": true, 00:17:41.951 "data_offset": 2048, 00:17:41.951 "data_size": 63488 00:17:41.951 } 00:17:41.951 ] 00:17:41.951 }' 00:17:41.951 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.951 16:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.518 [2024-11-05 16:31:55.412875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.518 [2024-11-05 16:31:55.412919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.518 [2024-11-05 16:31:55.413019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.518 [2024-11-05 16:31:55.413109] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.518 [2024-11-05 16:31:55.413126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.518 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.519 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:42.778 /dev/nbd0 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.778 1+0 records in 00:17:42.778 1+0 records out 00:17:42.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587575 s, 7.0 MB/s 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.778 16:31:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.778 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.037 /dev/nbd1 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:43.037 16:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.037 1+0 records in 00:17:43.037 1+0 records out 00:17:43.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003455 s, 11.9 MB/s 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.037 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.297 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.557 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.816 [2024-11-05 16:31:56.735414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.816 [2024-11-05 16:31:56.735495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.816 [2024-11-05 16:31:56.735520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:43.816 [2024-11-05 16:31:56.735535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.816 [2024-11-05 16:31:56.738489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.816 [2024-11-05 16:31:56.738602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.816 [2024-11-05 16:31:56.738749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.816 [2024-11-05 16:31:56.738836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.816 [2024-11-05 16:31:56.739033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.816 [2024-11-05 16:31:56.739141] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.816 spare 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.816 [2024-11-05 16:31:56.839079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:43.816 [2024-11-05 16:31:56.839246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:43.816 [2024-11-05 16:31:56.839701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:43.816 [2024-11-05 16:31:56.846231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:43.816 [2024-11-05 16:31:56.846294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:43.816 [2024-11-05 16:31:56.846624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.816 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.075 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.075 "name": "raid_bdev1", 00:17:44.075 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:44.075 "strip_size_kb": 64, 00:17:44.075 "state": "online", 00:17:44.075 "raid_level": "raid5f", 00:17:44.075 "superblock": true, 00:17:44.075 "num_base_bdevs": 3, 00:17:44.075 "num_base_bdevs_discovered": 3, 00:17:44.075 "num_base_bdevs_operational": 3, 00:17:44.075 "base_bdevs_list": [ 00:17:44.075 { 00:17:44.075 "name": "spare", 00:17:44.075 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:44.075 "is_configured": true, 00:17:44.075 "data_offset": 2048, 00:17:44.075 "data_size": 63488 00:17:44.075 }, 00:17:44.075 { 00:17:44.075 "name": "BaseBdev2", 00:17:44.075 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:44.075 "is_configured": true, 00:17:44.075 "data_offset": 
2048, 00:17:44.075 "data_size": 63488 00:17:44.075 }, 00:17:44.075 { 00:17:44.075 "name": "BaseBdev3", 00:17:44.075 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:44.075 "is_configured": true, 00:17:44.075 "data_offset": 2048, 00:17:44.075 "data_size": 63488 00:17:44.075 } 00:17:44.075 ] 00:17:44.075 }' 00:17:44.075 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.075 16:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.334 "name": "raid_bdev1", 00:17:44.334 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:44.334 "strip_size_kb": 64, 00:17:44.334 "state": "online", 00:17:44.334 "raid_level": "raid5f", 00:17:44.334 "superblock": true, 00:17:44.334 
"num_base_bdevs": 3, 00:17:44.334 "num_base_bdevs_discovered": 3, 00:17:44.334 "num_base_bdevs_operational": 3, 00:17:44.334 "base_bdevs_list": [ 00:17:44.334 { 00:17:44.334 "name": "spare", 00:17:44.334 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:44.334 "is_configured": true, 00:17:44.334 "data_offset": 2048, 00:17:44.334 "data_size": 63488 00:17:44.334 }, 00:17:44.334 { 00:17:44.334 "name": "BaseBdev2", 00:17:44.334 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:44.334 "is_configured": true, 00:17:44.334 "data_offset": 2048, 00:17:44.334 "data_size": 63488 00:17:44.334 }, 00:17:44.334 { 00:17:44.334 "name": "BaseBdev3", 00:17:44.334 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:44.334 "is_configured": true, 00:17:44.334 "data_offset": 2048, 00:17:44.334 "data_size": 63488 00:17:44.334 } 00:17:44.334 ] 00:17:44.334 }' 00:17:44.334 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.630 16:31:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.630 [2024-11-05 16:31:57.532894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.630 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.630 "name": "raid_bdev1", 00:17:44.630 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:44.630 "strip_size_kb": 64, 00:17:44.630 "state": "online", 00:17:44.631 "raid_level": "raid5f", 00:17:44.631 "superblock": true, 00:17:44.631 "num_base_bdevs": 3, 00:17:44.631 "num_base_bdevs_discovered": 2, 00:17:44.631 "num_base_bdevs_operational": 2, 00:17:44.631 "base_bdevs_list": [ 00:17:44.631 { 00:17:44.631 "name": null, 00:17:44.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.631 "is_configured": false, 00:17:44.631 "data_offset": 0, 00:17:44.631 "data_size": 63488 00:17:44.631 }, 00:17:44.631 { 00:17:44.631 "name": "BaseBdev2", 00:17:44.631 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:44.631 "is_configured": true, 00:17:44.631 "data_offset": 2048, 00:17:44.631 "data_size": 63488 00:17:44.631 }, 00:17:44.631 { 00:17:44.631 "name": "BaseBdev3", 00:17:44.631 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:44.631 "is_configured": true, 00:17:44.631 "data_offset": 2048, 00:17:44.631 "data_size": 63488 00:17:44.631 } 00:17:44.631 ] 00:17:44.631 }' 00:17:44.631 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.631 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.890 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.890 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.890 16:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.149 [2024-11-05 16:31:57.984532] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.149 [2024-11-05 16:31:57.984910] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.149 [2024-11-05 16:31:57.984990] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.149 [2024-11-05 16:31:57.985551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.149 [2024-11-05 16:31:58.005396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:45.149 16:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.149 16:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.149 [2024-11-05 16:31:58.015252] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.086 "name": "raid_bdev1", 00:17:46.086 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:46.086 "strip_size_kb": 64, 00:17:46.086 "state": "online", 00:17:46.086 "raid_level": "raid5f", 00:17:46.086 "superblock": true, 00:17:46.086 "num_base_bdevs": 3, 00:17:46.086 "num_base_bdevs_discovered": 3, 00:17:46.086 "num_base_bdevs_operational": 3, 00:17:46.086 "process": { 00:17:46.086 "type": "rebuild", 00:17:46.086 "target": "spare", 00:17:46.086 "progress": { 00:17:46.086 "blocks": 20480, 00:17:46.086 "percent": 16 00:17:46.086 } 00:17:46.086 }, 00:17:46.086 "base_bdevs_list": [ 00:17:46.086 { 00:17:46.086 "name": "spare", 00:17:46.086 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:46.086 "is_configured": true, 00:17:46.086 "data_offset": 2048, 00:17:46.086 "data_size": 63488 00:17:46.086 }, 00:17:46.086 { 00:17:46.086 "name": "BaseBdev2", 00:17:46.086 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:46.086 "is_configured": true, 00:17:46.086 "data_offset": 2048, 00:17:46.086 "data_size": 63488 00:17:46.086 }, 00:17:46.086 { 00:17:46.086 "name": "BaseBdev3", 00:17:46.086 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:46.086 "is_configured": true, 00:17:46.086 "data_offset": 2048, 00:17:46.086 "data_size": 63488 00:17:46.086 } 00:17:46.086 ] 00:17:46.086 }' 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.086 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.086 [2024-11-05 16:31:59.147554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.344 [2024-11-05 16:31:59.227629] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.344 [2024-11-05 16:31:59.228269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.344 [2024-11-05 16:31:59.228310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.344 [2024-11-05 16:31:59.228326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.344 16:31:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.344 "name": "raid_bdev1", 00:17:46.344 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:46.344 "strip_size_kb": 64, 00:17:46.344 "state": "online", 00:17:46.344 "raid_level": "raid5f", 00:17:46.344 "superblock": true, 00:17:46.344 "num_base_bdevs": 3, 00:17:46.344 "num_base_bdevs_discovered": 2, 00:17:46.344 "num_base_bdevs_operational": 2, 00:17:46.344 "base_bdevs_list": [ 00:17:46.344 { 00:17:46.344 "name": null, 00:17:46.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.344 "is_configured": false, 00:17:46.344 "data_offset": 0, 00:17:46.344 "data_size": 63488 00:17:46.344 }, 00:17:46.344 { 00:17:46.344 "name": "BaseBdev2", 00:17:46.344 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:46.344 "is_configured": true, 00:17:46.344 "data_offset": 2048, 00:17:46.344 "data_size": 63488 00:17:46.344 }, 00:17:46.344 { 00:17:46.344 "name": "BaseBdev3", 00:17:46.344 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:46.344 "is_configured": true, 00:17:46.344 "data_offset": 2048, 00:17:46.344 "data_size": 63488 00:17:46.344 } 00:17:46.344 ] 00:17:46.344 }' 00:17:46.344 16:31:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.344 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.911 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.911 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.911 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.911 [2024-11-05 16:31:59.735943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.911 [2024-11-05 16:31:59.736137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.911 [2024-11-05 16:31:59.736193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:46.911 [2024-11-05 16:31:59.736266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.911 [2024-11-05 16:31:59.736933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.911 [2024-11-05 16:31:59.737033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.911 [2024-11-05 16:31:59.737209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:46.911 [2024-11-05 16:31:59.737271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.911 [2024-11-05 16:31:59.737334] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:46.911 [2024-11-05 16:31:59.737428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.911 [2024-11-05 16:31:59.757906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:46.911 spare 00:17:46.911 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.911 16:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:46.911 [2024-11-05 16:31:59.767855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.843 "name": "raid_bdev1", 00:17:47.843 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:47.843 "strip_size_kb": 64, 00:17:47.843 "state": 
"online", 00:17:47.843 "raid_level": "raid5f", 00:17:47.843 "superblock": true, 00:17:47.843 "num_base_bdevs": 3, 00:17:47.843 "num_base_bdevs_discovered": 3, 00:17:47.843 "num_base_bdevs_operational": 3, 00:17:47.843 "process": { 00:17:47.843 "type": "rebuild", 00:17:47.843 "target": "spare", 00:17:47.843 "progress": { 00:17:47.843 "blocks": 20480, 00:17:47.843 "percent": 16 00:17:47.843 } 00:17:47.843 }, 00:17:47.843 "base_bdevs_list": [ 00:17:47.843 { 00:17:47.843 "name": "spare", 00:17:47.843 "uuid": "209b868e-5866-5431-a116-2e891acf5db6", 00:17:47.843 "is_configured": true, 00:17:47.843 "data_offset": 2048, 00:17:47.843 "data_size": 63488 00:17:47.843 }, 00:17:47.843 { 00:17:47.843 "name": "BaseBdev2", 00:17:47.843 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:47.843 "is_configured": true, 00:17:47.843 "data_offset": 2048, 00:17:47.843 "data_size": 63488 00:17:47.843 }, 00:17:47.843 { 00:17:47.843 "name": "BaseBdev3", 00:17:47.843 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:47.843 "is_configured": true, 00:17:47.843 "data_offset": 2048, 00:17:47.843 "data_size": 63488 00:17:47.843 } 00:17:47.843 ] 00:17:47.843 }' 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.843 16:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.843 [2024-11-05 16:32:00.924026] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.101 [2024-11-05 16:32:00.980151] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.101 [2024-11-05 16:32:00.980342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.101 [2024-11-05 16:32:00.980389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.101 [2024-11-05 16:32:00.980401] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.101 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.101 "name": "raid_bdev1", 00:17:48.101 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:48.101 "strip_size_kb": 64, 00:17:48.101 "state": "online", 00:17:48.101 "raid_level": "raid5f", 00:17:48.102 "superblock": true, 00:17:48.102 "num_base_bdevs": 3, 00:17:48.102 "num_base_bdevs_discovered": 2, 00:17:48.102 "num_base_bdevs_operational": 2, 00:17:48.102 "base_bdevs_list": [ 00:17:48.102 { 00:17:48.102 "name": null, 00:17:48.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.102 "is_configured": false, 00:17:48.102 "data_offset": 0, 00:17:48.102 "data_size": 63488 00:17:48.102 }, 00:17:48.102 { 00:17:48.102 "name": "BaseBdev2", 00:17:48.102 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:48.102 "is_configured": true, 00:17:48.102 "data_offset": 2048, 00:17:48.102 "data_size": 63488 00:17:48.102 }, 00:17:48.102 { 00:17:48.102 "name": "BaseBdev3", 00:17:48.102 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:48.102 "is_configured": true, 00:17:48.102 "data_offset": 2048, 00:17:48.102 "data_size": 63488 00:17:48.102 } 00:17:48.102 ] 00:17:48.102 }' 00:17:48.102 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.102 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.361 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.619 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.620 "name": "raid_bdev1", 00:17:48.620 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:48.620 "strip_size_kb": 64, 00:17:48.620 "state": "online", 00:17:48.620 "raid_level": "raid5f", 00:17:48.620 "superblock": true, 00:17:48.620 "num_base_bdevs": 3, 00:17:48.620 "num_base_bdevs_discovered": 2, 00:17:48.620 "num_base_bdevs_operational": 2, 00:17:48.620 "base_bdevs_list": [ 00:17:48.620 { 00:17:48.620 "name": null, 00:17:48.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.620 "is_configured": false, 00:17:48.620 "data_offset": 0, 00:17:48.620 "data_size": 63488 00:17:48.620 }, 00:17:48.620 { 00:17:48.620 "name": "BaseBdev2", 00:17:48.620 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:48.620 "is_configured": true, 00:17:48.620 "data_offset": 2048, 00:17:48.620 "data_size": 63488 00:17:48.620 }, 00:17:48.620 { 00:17:48.620 "name": "BaseBdev3", 00:17:48.620 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:48.620 "is_configured": true, 
00:17:48.620 "data_offset": 2048, 00:17:48.620 "data_size": 63488 00:17:48.620 } 00:17:48.620 ] 00:17:48.620 }' 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.620 [2024-11-05 16:32:01.592125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.620 [2024-11-05 16:32:01.592211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.620 [2024-11-05 16:32:01.592244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:48.620 [2024-11-05 16:32:01.592257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.620 [2024-11-05 16:32:01.592860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.620 [2024-11-05 
16:32:01.592889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.620 [2024-11-05 16:32:01.592988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.620 [2024-11-05 16:32:01.593005] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.620 [2024-11-05 16:32:01.593029] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.620 [2024-11-05 16:32:01.593045] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:48.620 BaseBdev1 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.620 16:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.556 16:32:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.556 "name": "raid_bdev1", 00:17:49.556 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:49.556 "strip_size_kb": 64, 00:17:49.556 "state": "online", 00:17:49.556 "raid_level": "raid5f", 00:17:49.556 "superblock": true, 00:17:49.556 "num_base_bdevs": 3, 00:17:49.556 "num_base_bdevs_discovered": 2, 00:17:49.556 "num_base_bdevs_operational": 2, 00:17:49.556 "base_bdevs_list": [ 00:17:49.556 { 00:17:49.556 "name": null, 00:17:49.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.556 "is_configured": false, 00:17:49.556 "data_offset": 0, 00:17:49.556 "data_size": 63488 00:17:49.556 }, 00:17:49.556 { 00:17:49.556 "name": "BaseBdev2", 00:17:49.556 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:49.556 "is_configured": true, 00:17:49.556 "data_offset": 2048, 00:17:49.556 "data_size": 63488 00:17:49.556 }, 00:17:49.556 { 00:17:49.556 "name": "BaseBdev3", 00:17:49.556 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:49.556 "is_configured": true, 00:17:49.556 "data_offset": 2048, 00:17:49.556 "data_size": 63488 00:17:49.556 } 00:17:49.556 ] 00:17:49.556 }' 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.556 16:32:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.122 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.123 "name": "raid_bdev1", 00:17:50.123 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:50.123 "strip_size_kb": 64, 00:17:50.123 "state": "online", 00:17:50.123 "raid_level": "raid5f", 00:17:50.123 "superblock": true, 00:17:50.123 "num_base_bdevs": 3, 00:17:50.123 "num_base_bdevs_discovered": 2, 00:17:50.123 "num_base_bdevs_operational": 2, 00:17:50.123 "base_bdevs_list": [ 00:17:50.123 { 00:17:50.123 "name": null, 00:17:50.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.123 "is_configured": false, 00:17:50.123 "data_offset": 0, 00:17:50.123 "data_size": 63488 00:17:50.123 }, 00:17:50.123 { 00:17:50.123 "name": "BaseBdev2", 00:17:50.123 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 
00:17:50.123 "is_configured": true, 00:17:50.123 "data_offset": 2048, 00:17:50.123 "data_size": 63488 00:17:50.123 }, 00:17:50.123 { 00:17:50.123 "name": "BaseBdev3", 00:17:50.123 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:50.123 "is_configured": true, 00:17:50.123 "data_offset": 2048, 00:17:50.123 "data_size": 63488 00:17:50.123 } 00:17:50.123 ] 00:17:50.123 }' 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.123 16:32:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.123 [2024-11-05 16:32:03.173734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.123 [2024-11-05 16:32:03.174021] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.123 [2024-11-05 16:32:03.174106] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:50.123 request: 00:17:50.123 { 00:17:50.123 "base_bdev": "BaseBdev1", 00:17:50.123 "raid_bdev": "raid_bdev1", 00:17:50.123 "method": "bdev_raid_add_base_bdev", 00:17:50.123 "req_id": 1 00:17:50.123 } 00:17:50.123 Got JSON-RPC error response 00:17:50.123 response: 00:17:50.123 { 00:17:50.123 "code": -22, 00:17:50.123 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:50.123 } 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.123 16:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.588 "name": "raid_bdev1", 00:17:51.588 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:51.588 "strip_size_kb": 64, 00:17:51.588 "state": "online", 00:17:51.588 "raid_level": "raid5f", 00:17:51.588 "superblock": true, 00:17:51.588 "num_base_bdevs": 3, 00:17:51.588 "num_base_bdevs_discovered": 2, 00:17:51.588 "num_base_bdevs_operational": 2, 00:17:51.588 "base_bdevs_list": [ 00:17:51.588 { 00:17:51.588 "name": null, 00:17:51.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.588 "is_configured": false, 00:17:51.588 "data_offset": 0, 00:17:51.588 "data_size": 63488 00:17:51.588 }, 00:17:51.588 { 00:17:51.588 
"name": "BaseBdev2", 00:17:51.588 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:51.588 "is_configured": true, 00:17:51.588 "data_offset": 2048, 00:17:51.588 "data_size": 63488 00:17:51.588 }, 00:17:51.588 { 00:17:51.588 "name": "BaseBdev3", 00:17:51.588 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:51.588 "is_configured": true, 00:17:51.588 "data_offset": 2048, 00:17:51.588 "data_size": 63488 00:17:51.588 } 00:17:51.588 ] 00:17:51.588 }' 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.588 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.847 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.847 "name": "raid_bdev1", 00:17:51.847 "uuid": "c445e848-5d28-4b94-aa8d-4cd211d57e8d", 00:17:51.847 
"strip_size_kb": 64, 00:17:51.848 "state": "online", 00:17:51.848 "raid_level": "raid5f", 00:17:51.848 "superblock": true, 00:17:51.848 "num_base_bdevs": 3, 00:17:51.848 "num_base_bdevs_discovered": 2, 00:17:51.848 "num_base_bdevs_operational": 2, 00:17:51.848 "base_bdevs_list": [ 00:17:51.848 { 00:17:51.848 "name": null, 00:17:51.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.848 "is_configured": false, 00:17:51.848 "data_offset": 0, 00:17:51.848 "data_size": 63488 00:17:51.848 }, 00:17:51.848 { 00:17:51.848 "name": "BaseBdev2", 00:17:51.848 "uuid": "ad1e58e2-8dfd-5cb8-8a85-05176cc1694e", 00:17:51.848 "is_configured": true, 00:17:51.848 "data_offset": 2048, 00:17:51.848 "data_size": 63488 00:17:51.848 }, 00:17:51.848 { 00:17:51.848 "name": "BaseBdev3", 00:17:51.848 "uuid": "190fb495-c5cf-548f-bb00-6d1bc6cae248", 00:17:51.848 "is_configured": true, 00:17:51.848 "data_offset": 2048, 00:17:51.848 "data_size": 63488 00:17:51.848 } 00:17:51.848 ] 00:17:51.848 }' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82369 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82369 ']' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82369 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:51.848 16:32:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82369 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:51.848 killing process with pid 82369 00:17:51.848 Received shutdown signal, test time was about 60.000000 seconds 00:17:51.848 00:17:51.848 Latency(us) 00:17:51.848 [2024-11-05T16:32:04.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.848 [2024-11-05T16:32:04.936Z] =================================================================================================================== 00:17:51.848 [2024-11-05T16:32:04.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82369' 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82369 00:17:51.848 [2024-11-05 16:32:04.822194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.848 16:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82369 00:17:51.848 [2024-11-05 16:32:04.822358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.848 [2024-11-05 16:32:04.822438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.848 [2024-11-05 16:32:04.822455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.415 [2024-11-05 16:32:05.309943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.793 16:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:53.793 00:17:53.793 real 0m24.006s 00:17:53.793 user 0m30.574s 
00:17:53.793 sys 0m2.904s 00:17:53.793 ************************************ 00:17:53.793 END TEST raid5f_rebuild_test_sb 00:17:53.793 16:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:53.793 16:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.793 ************************************ 00:17:53.793 16:32:06 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:53.793 16:32:06 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:53.793 16:32:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:53.793 16:32:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:53.793 16:32:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.793 ************************************ 00:17:53.793 START TEST raid5f_state_function_test 00:17:53.793 ************************************ 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:53.793 Process raid pid: 83127 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83127 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83127' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83127 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83127 ']' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:53.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.793 16:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:53.793 [2024-11-05 16:32:06.840025] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:17:53.793 [2024-11-05 16:32:06.840194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.052 [2024-11-05 16:32:07.014491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.310 [2024-11-05 16:32:07.189716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.569 [2024-11-05 16:32:07.477730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.569 [2024-11-05 16:32:07.477794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.827 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:54.827 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:54.827 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:54.827 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.827 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.827 [2024-11-05 16:32:07.805743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.827 [2024-11-05 16:32:07.805930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.827 [2024-11-05 16:32:07.805974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.827 [2024-11-05 16:32:07.806003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.827 [2024-11-05 16:32:07.806034] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:54.827 [2024-11-05 16:32:07.806063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.828 [2024-11-05 16:32:07.806092] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.828 [2024-11-05 16:32:07.806129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.828 "name": "Existed_Raid", 00:17:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.828 "strip_size_kb": 64, 00:17:54.828 "state": "configuring", 00:17:54.828 "raid_level": "raid5f", 00:17:54.828 "superblock": false, 00:17:54.828 "num_base_bdevs": 4, 00:17:54.828 "num_base_bdevs_discovered": 0, 00:17:54.828 "num_base_bdevs_operational": 4, 00:17:54.828 "base_bdevs_list": [ 00:17:54.828 { 00:17:54.828 "name": "BaseBdev1", 00:17:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.828 "is_configured": false, 00:17:54.828 "data_offset": 0, 00:17:54.828 "data_size": 0 00:17:54.828 }, 00:17:54.828 { 00:17:54.828 "name": "BaseBdev2", 00:17:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.828 "is_configured": false, 00:17:54.828 "data_offset": 0, 00:17:54.828 "data_size": 0 00:17:54.828 }, 00:17:54.828 { 00:17:54.828 "name": "BaseBdev3", 00:17:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.828 "is_configured": false, 00:17:54.828 "data_offset": 0, 00:17:54.828 "data_size": 0 00:17:54.828 }, 00:17:54.828 { 00:17:54.828 "name": "BaseBdev4", 00:17:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.828 "is_configured": false, 00:17:54.828 "data_offset": 0, 00:17:54.828 "data_size": 0 00:17:54.828 } 00:17:54.828 ] 00:17:54.828 }' 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.828 16:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 [2024-11-05 16:32:08.248907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.395 [2024-11-05 16:32:08.249009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 [2024-11-05 16:32:08.256917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.395 [2024-11-05 16:32:08.256972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.395 [2024-11-05 16:32:08.256984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.395 [2024-11-05 16:32:08.256995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.395 [2024-11-05 16:32:08.257003] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.395 [2024-11-05 16:32:08.257014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.395 [2024-11-05 16:32:08.257021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:55.395 [2024-11-05 16:32:08.257031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 [2024-11-05 16:32:08.310076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.395 BaseBdev1 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.395 
16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.395 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.395 [ 00:17:55.395 { 00:17:55.395 "name": "BaseBdev1", 00:17:55.395 "aliases": [ 00:17:55.395 "07e4d4be-d99f-47ba-a24e-9bbf3c281231" 00:17:55.395 ], 00:17:55.395 "product_name": "Malloc disk", 00:17:55.395 "block_size": 512, 00:17:55.395 "num_blocks": 65536, 00:17:55.395 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:55.395 "assigned_rate_limits": { 00:17:55.395 "rw_ios_per_sec": 0, 00:17:55.395 "rw_mbytes_per_sec": 0, 00:17:55.395 "r_mbytes_per_sec": 0, 00:17:55.395 "w_mbytes_per_sec": 0 00:17:55.395 }, 00:17:55.395 "claimed": true, 00:17:55.395 "claim_type": "exclusive_write", 00:17:55.395 "zoned": false, 00:17:55.395 "supported_io_types": { 00:17:55.395 "read": true, 00:17:55.396 "write": true, 00:17:55.396 "unmap": true, 00:17:55.396 "flush": true, 00:17:55.396 "reset": true, 00:17:55.396 "nvme_admin": false, 00:17:55.396 "nvme_io": false, 00:17:55.396 "nvme_io_md": false, 00:17:55.396 "write_zeroes": true, 00:17:55.396 "zcopy": true, 00:17:55.396 "get_zone_info": false, 00:17:55.396 "zone_management": false, 00:17:55.396 "zone_append": false, 00:17:55.396 "compare": false, 00:17:55.396 "compare_and_write": false, 00:17:55.396 "abort": true, 00:17:55.396 "seek_hole": false, 00:17:55.396 "seek_data": false, 00:17:55.396 "copy": true, 00:17:55.396 "nvme_iov_md": false 00:17:55.396 }, 00:17:55.396 "memory_domains": [ 00:17:55.396 { 00:17:55.396 "dma_device_id": "system", 00:17:55.396 "dma_device_type": 1 00:17:55.396 }, 00:17:55.396 { 00:17:55.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.396 "dma_device_type": 2 00:17:55.396 } 00:17:55.396 ], 00:17:55.396 "driver_specific": {} 00:17:55.396 } 
00:17:55.396 ] 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.396 "name": "Existed_Raid", 00:17:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.396 "strip_size_kb": 64, 00:17:55.396 "state": "configuring", 00:17:55.396 "raid_level": "raid5f", 00:17:55.396 "superblock": false, 00:17:55.396 "num_base_bdevs": 4, 00:17:55.396 "num_base_bdevs_discovered": 1, 00:17:55.396 "num_base_bdevs_operational": 4, 00:17:55.396 "base_bdevs_list": [ 00:17:55.396 { 00:17:55.396 "name": "BaseBdev1", 00:17:55.396 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:55.396 "is_configured": true, 00:17:55.396 "data_offset": 0, 00:17:55.396 "data_size": 65536 00:17:55.396 }, 00:17:55.396 { 00:17:55.396 "name": "BaseBdev2", 00:17:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.396 "is_configured": false, 00:17:55.396 "data_offset": 0, 00:17:55.396 "data_size": 0 00:17:55.396 }, 00:17:55.396 { 00:17:55.396 "name": "BaseBdev3", 00:17:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.396 "is_configured": false, 00:17:55.396 "data_offset": 0, 00:17:55.396 "data_size": 0 00:17:55.396 }, 00:17:55.396 { 00:17:55.396 "name": "BaseBdev4", 00:17:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.396 "is_configured": false, 00:17:55.396 "data_offset": 0, 00:17:55.396 "data_size": 0 00:17:55.396 } 00:17:55.396 ] 00:17:55.396 }' 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.396 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.964 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.964 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.964 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.964 
[2024-11-05 16:32:08.773663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.965 [2024-11-05 16:32:08.773799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.965 [2024-11-05 16:32:08.781760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.965 [2024-11-05 16:32:08.784204] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.965 [2024-11-05 16:32:08.784316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.965 [2024-11-05 16:32:08.784442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.965 [2024-11-05 16:32:08.784486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.965 [2024-11-05 16:32:08.784527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.965 [2024-11-05 16:32:08.784564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.965 "name": "Existed_Raid", 00:17:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:55.965 "strip_size_kb": 64, 00:17:55.965 "state": "configuring", 00:17:55.965 "raid_level": "raid5f", 00:17:55.965 "superblock": false, 00:17:55.965 "num_base_bdevs": 4, 00:17:55.965 "num_base_bdevs_discovered": 1, 00:17:55.965 "num_base_bdevs_operational": 4, 00:17:55.965 "base_bdevs_list": [ 00:17:55.965 { 00:17:55.965 "name": "BaseBdev1", 00:17:55.965 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:55.965 "is_configured": true, 00:17:55.965 "data_offset": 0, 00:17:55.965 "data_size": 65536 00:17:55.965 }, 00:17:55.965 { 00:17:55.965 "name": "BaseBdev2", 00:17:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.965 "is_configured": false, 00:17:55.965 "data_offset": 0, 00:17:55.965 "data_size": 0 00:17:55.965 }, 00:17:55.965 { 00:17:55.965 "name": "BaseBdev3", 00:17:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.965 "is_configured": false, 00:17:55.965 "data_offset": 0, 00:17:55.965 "data_size": 0 00:17:55.965 }, 00:17:55.965 { 00:17:55.965 "name": "BaseBdev4", 00:17:55.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.965 "is_configured": false, 00:17:55.965 "data_offset": 0, 00:17:55.965 "data_size": 0 00:17:55.965 } 00:17:55.965 ] 00:17:55.965 }' 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.965 16:32:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.223 [2024-11-05 16:32:09.210074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.223 BaseBdev2 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.223 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.224 [ 00:17:56.224 { 00:17:56.224 "name": "BaseBdev2", 00:17:56.224 "aliases": [ 00:17:56.224 "772e2a6e-75b4-4088-85b9-94d44c6c6a0e" 00:17:56.224 ], 00:17:56.224 "product_name": "Malloc disk", 00:17:56.224 "block_size": 512, 00:17:56.224 "num_blocks": 65536, 00:17:56.224 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:56.224 "assigned_rate_limits": { 00:17:56.224 "rw_ios_per_sec": 0, 00:17:56.224 "rw_mbytes_per_sec": 0, 00:17:56.224 
"r_mbytes_per_sec": 0, 00:17:56.224 "w_mbytes_per_sec": 0 00:17:56.224 }, 00:17:56.224 "claimed": true, 00:17:56.224 "claim_type": "exclusive_write", 00:17:56.224 "zoned": false, 00:17:56.224 "supported_io_types": { 00:17:56.224 "read": true, 00:17:56.224 "write": true, 00:17:56.224 "unmap": true, 00:17:56.224 "flush": true, 00:17:56.224 "reset": true, 00:17:56.224 "nvme_admin": false, 00:17:56.224 "nvme_io": false, 00:17:56.224 "nvme_io_md": false, 00:17:56.224 "write_zeroes": true, 00:17:56.224 "zcopy": true, 00:17:56.224 "get_zone_info": false, 00:17:56.224 "zone_management": false, 00:17:56.224 "zone_append": false, 00:17:56.224 "compare": false, 00:17:56.224 "compare_and_write": false, 00:17:56.224 "abort": true, 00:17:56.224 "seek_hole": false, 00:17:56.224 "seek_data": false, 00:17:56.224 "copy": true, 00:17:56.224 "nvme_iov_md": false 00:17:56.224 }, 00:17:56.224 "memory_domains": [ 00:17:56.224 { 00:17:56.224 "dma_device_id": "system", 00:17:56.224 "dma_device_type": 1 00:17:56.224 }, 00:17:56.224 { 00:17:56.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.224 "dma_device_type": 2 00:17:56.224 } 00:17:56.224 ], 00:17:56.224 "driver_specific": {} 00:17:56.224 } 00:17:56.224 ] 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.224 "name": "Existed_Raid", 00:17:56.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.224 "strip_size_kb": 64, 00:17:56.224 "state": "configuring", 00:17:56.224 "raid_level": "raid5f", 00:17:56.224 "superblock": false, 00:17:56.224 "num_base_bdevs": 4, 00:17:56.224 "num_base_bdevs_discovered": 2, 00:17:56.224 "num_base_bdevs_operational": 4, 00:17:56.224 "base_bdevs_list": [ 00:17:56.224 { 00:17:56.224 "name": "BaseBdev1", 00:17:56.224 "uuid": 
"07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:56.224 "is_configured": true, 00:17:56.224 "data_offset": 0, 00:17:56.224 "data_size": 65536 00:17:56.224 }, 00:17:56.224 { 00:17:56.224 "name": "BaseBdev2", 00:17:56.224 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:56.224 "is_configured": true, 00:17:56.224 "data_offset": 0, 00:17:56.224 "data_size": 65536 00:17:56.224 }, 00:17:56.224 { 00:17:56.224 "name": "BaseBdev3", 00:17:56.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.224 "is_configured": false, 00:17:56.224 "data_offset": 0, 00:17:56.224 "data_size": 0 00:17:56.224 }, 00:17:56.224 { 00:17:56.224 "name": "BaseBdev4", 00:17:56.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.224 "is_configured": false, 00:17:56.224 "data_offset": 0, 00:17:56.224 "data_size": 0 00:17:56.224 } 00:17:56.224 ] 00:17:56.224 }' 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.224 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.791 [2024-11-05 16:32:09.747677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.791 BaseBdev3 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.791 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.792 [ 00:17:56.792 { 00:17:56.792 "name": "BaseBdev3", 00:17:56.792 "aliases": [ 00:17:56.792 "783dc1fd-ff3b-4651-b416-ce4ab88c7117" 00:17:56.792 ], 00:17:56.792 "product_name": "Malloc disk", 00:17:56.792 "block_size": 512, 00:17:56.792 "num_blocks": 65536, 00:17:56.792 "uuid": "783dc1fd-ff3b-4651-b416-ce4ab88c7117", 00:17:56.792 "assigned_rate_limits": { 00:17:56.792 "rw_ios_per_sec": 0, 00:17:56.792 "rw_mbytes_per_sec": 0, 00:17:56.792 "r_mbytes_per_sec": 0, 00:17:56.792 "w_mbytes_per_sec": 0 00:17:56.792 }, 00:17:56.792 "claimed": true, 00:17:56.792 "claim_type": "exclusive_write", 00:17:56.792 "zoned": false, 00:17:56.792 "supported_io_types": { 00:17:56.792 "read": true, 00:17:56.792 "write": true, 00:17:56.792 "unmap": true, 00:17:56.792 "flush": true, 00:17:56.792 "reset": true, 00:17:56.792 "nvme_admin": false, 
00:17:56.792 "nvme_io": false, 00:17:56.792 "nvme_io_md": false, 00:17:56.792 "write_zeroes": true, 00:17:56.792 "zcopy": true, 00:17:56.792 "get_zone_info": false, 00:17:56.792 "zone_management": false, 00:17:56.792 "zone_append": false, 00:17:56.792 "compare": false, 00:17:56.792 "compare_and_write": false, 00:17:56.792 "abort": true, 00:17:56.792 "seek_hole": false, 00:17:56.792 "seek_data": false, 00:17:56.792 "copy": true, 00:17:56.792 "nvme_iov_md": false 00:17:56.792 }, 00:17:56.792 "memory_domains": [ 00:17:56.792 { 00:17:56.792 "dma_device_id": "system", 00:17:56.792 "dma_device_type": 1 00:17:56.792 }, 00:17:56.792 { 00:17:56.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.792 "dma_device_type": 2 00:17:56.792 } 00:17:56.792 ], 00:17:56.792 "driver_specific": {} 00:17:56.792 } 00:17:56.792 ] 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.792 "name": "Existed_Raid", 00:17:56.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.792 "strip_size_kb": 64, 00:17:56.792 "state": "configuring", 00:17:56.792 "raid_level": "raid5f", 00:17:56.792 "superblock": false, 00:17:56.792 "num_base_bdevs": 4, 00:17:56.792 "num_base_bdevs_discovered": 3, 00:17:56.792 "num_base_bdevs_operational": 4, 00:17:56.792 "base_bdevs_list": [ 00:17:56.792 { 00:17:56.792 "name": "BaseBdev1", 00:17:56.792 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:56.792 "is_configured": true, 00:17:56.792 "data_offset": 0, 00:17:56.792 "data_size": 65536 00:17:56.792 }, 00:17:56.792 { 00:17:56.792 "name": "BaseBdev2", 00:17:56.792 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:56.792 "is_configured": true, 00:17:56.792 "data_offset": 0, 00:17:56.792 "data_size": 65536 00:17:56.792 }, 00:17:56.792 { 
00:17:56.792 "name": "BaseBdev3", 00:17:56.792 "uuid": "783dc1fd-ff3b-4651-b416-ce4ab88c7117", 00:17:56.792 "is_configured": true, 00:17:56.792 "data_offset": 0, 00:17:56.792 "data_size": 65536 00:17:56.792 }, 00:17:56.792 { 00:17:56.792 "name": "BaseBdev4", 00:17:56.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.792 "is_configured": false, 00:17:56.792 "data_offset": 0, 00:17:56.792 "data_size": 0 00:17:56.792 } 00:17:56.792 ] 00:17:56.792 }' 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.792 16:32:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 [2024-11-05 16:32:10.283619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.359 [2024-11-05 16:32:10.283799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:57.359 [2024-11-05 16:32:10.283835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:57.359 [2024-11-05 16:32:10.284166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:57.359 [2024-11-05 16:32:10.292197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:57.359 [2024-11-05 16:32:10.292261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:57.359 [2024-11-05 16:32:10.292639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.359 BaseBdev4 00:17:57.359 16:32:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.359 [ 00:17:57.359 { 00:17:57.359 "name": "BaseBdev4", 00:17:57.359 "aliases": [ 00:17:57.359 "aaa2d4d1-2973-453c-911a-30f129bebe76" 00:17:57.359 ], 00:17:57.359 "product_name": "Malloc disk", 00:17:57.359 "block_size": 512, 00:17:57.359 "num_blocks": 65536, 00:17:57.359 "uuid": "aaa2d4d1-2973-453c-911a-30f129bebe76", 00:17:57.359 "assigned_rate_limits": { 00:17:57.359 "rw_ios_per_sec": 0, 00:17:57.359 
"rw_mbytes_per_sec": 0, 00:17:57.359 "r_mbytes_per_sec": 0, 00:17:57.359 "w_mbytes_per_sec": 0 00:17:57.359 }, 00:17:57.359 "claimed": true, 00:17:57.359 "claim_type": "exclusive_write", 00:17:57.359 "zoned": false, 00:17:57.359 "supported_io_types": { 00:17:57.359 "read": true, 00:17:57.359 "write": true, 00:17:57.359 "unmap": true, 00:17:57.359 "flush": true, 00:17:57.359 "reset": true, 00:17:57.359 "nvme_admin": false, 00:17:57.359 "nvme_io": false, 00:17:57.359 "nvme_io_md": false, 00:17:57.359 "write_zeroes": true, 00:17:57.359 "zcopy": true, 00:17:57.359 "get_zone_info": false, 00:17:57.359 "zone_management": false, 00:17:57.359 "zone_append": false, 00:17:57.359 "compare": false, 00:17:57.359 "compare_and_write": false, 00:17:57.359 "abort": true, 00:17:57.359 "seek_hole": false, 00:17:57.359 "seek_data": false, 00:17:57.359 "copy": true, 00:17:57.359 "nvme_iov_md": false 00:17:57.359 }, 00:17:57.359 "memory_domains": [ 00:17:57.359 { 00:17:57.359 "dma_device_id": "system", 00:17:57.359 "dma_device_type": 1 00:17:57.359 }, 00:17:57.359 { 00:17:57.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.359 "dma_device_type": 2 00:17:57.359 } 00:17:57.359 ], 00:17:57.359 "driver_specific": {} 00:17:57.359 } 00:17:57.359 ] 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.359 16:32:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.359 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.360 "name": "Existed_Raid", 00:17:57.360 "uuid": "b3edd2a8-faf9-4dab-abd8-b54edff50afc", 00:17:57.360 "strip_size_kb": 64, 00:17:57.360 "state": "online", 00:17:57.360 "raid_level": "raid5f", 00:17:57.360 "superblock": false, 00:17:57.360 "num_base_bdevs": 4, 00:17:57.360 "num_base_bdevs_discovered": 4, 00:17:57.360 "num_base_bdevs_operational": 4, 00:17:57.360 "base_bdevs_list": [ 00:17:57.360 { 00:17:57.360 "name": 
"BaseBdev1", 00:17:57.360 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:57.360 "is_configured": true, 00:17:57.360 "data_offset": 0, 00:17:57.360 "data_size": 65536 00:17:57.360 }, 00:17:57.360 { 00:17:57.360 "name": "BaseBdev2", 00:17:57.360 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:57.360 "is_configured": true, 00:17:57.360 "data_offset": 0, 00:17:57.360 "data_size": 65536 00:17:57.360 }, 00:17:57.360 { 00:17:57.360 "name": "BaseBdev3", 00:17:57.360 "uuid": "783dc1fd-ff3b-4651-b416-ce4ab88c7117", 00:17:57.360 "is_configured": true, 00:17:57.360 "data_offset": 0, 00:17:57.360 "data_size": 65536 00:17:57.360 }, 00:17:57.360 { 00:17:57.360 "name": "BaseBdev4", 00:17:57.360 "uuid": "aaa2d4d1-2973-453c-911a-30f129bebe76", 00:17:57.360 "is_configured": true, 00:17:57.360 "data_offset": 0, 00:17:57.360 "data_size": 65536 00:17:57.360 } 00:17:57.360 ] 00:17:57.360 }' 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.360 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.926 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.926 [2024-11-05 16:32:10.741552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.927 "name": "Existed_Raid", 00:17:57.927 "aliases": [ 00:17:57.927 "b3edd2a8-faf9-4dab-abd8-b54edff50afc" 00:17:57.927 ], 00:17:57.927 "product_name": "Raid Volume", 00:17:57.927 "block_size": 512, 00:17:57.927 "num_blocks": 196608, 00:17:57.927 "uuid": "b3edd2a8-faf9-4dab-abd8-b54edff50afc", 00:17:57.927 "assigned_rate_limits": { 00:17:57.927 "rw_ios_per_sec": 0, 00:17:57.927 "rw_mbytes_per_sec": 0, 00:17:57.927 "r_mbytes_per_sec": 0, 00:17:57.927 "w_mbytes_per_sec": 0 00:17:57.927 }, 00:17:57.927 "claimed": false, 00:17:57.927 "zoned": false, 00:17:57.927 "supported_io_types": { 00:17:57.927 "read": true, 00:17:57.927 "write": true, 00:17:57.927 "unmap": false, 00:17:57.927 "flush": false, 00:17:57.927 "reset": true, 00:17:57.927 "nvme_admin": false, 00:17:57.927 "nvme_io": false, 00:17:57.927 "nvme_io_md": false, 00:17:57.927 "write_zeroes": true, 00:17:57.927 "zcopy": false, 00:17:57.927 "get_zone_info": false, 00:17:57.927 "zone_management": false, 00:17:57.927 "zone_append": false, 00:17:57.927 "compare": false, 00:17:57.927 "compare_and_write": false, 00:17:57.927 "abort": false, 00:17:57.927 "seek_hole": false, 00:17:57.927 "seek_data": false, 00:17:57.927 "copy": false, 00:17:57.927 "nvme_iov_md": false 00:17:57.927 }, 00:17:57.927 "driver_specific": { 00:17:57.927 "raid": { 00:17:57.927 "uuid": "b3edd2a8-faf9-4dab-abd8-b54edff50afc", 00:17:57.927 "strip_size_kb": 64, 
00:17:57.927 "state": "online", 00:17:57.927 "raid_level": "raid5f", 00:17:57.927 "superblock": false, 00:17:57.927 "num_base_bdevs": 4, 00:17:57.927 "num_base_bdevs_discovered": 4, 00:17:57.927 "num_base_bdevs_operational": 4, 00:17:57.927 "base_bdevs_list": [ 00:17:57.927 { 00:17:57.927 "name": "BaseBdev1", 00:17:57.927 "uuid": "07e4d4be-d99f-47ba-a24e-9bbf3c281231", 00:17:57.927 "is_configured": true, 00:17:57.927 "data_offset": 0, 00:17:57.927 "data_size": 65536 00:17:57.927 }, 00:17:57.927 { 00:17:57.927 "name": "BaseBdev2", 00:17:57.927 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:57.927 "is_configured": true, 00:17:57.927 "data_offset": 0, 00:17:57.927 "data_size": 65536 00:17:57.927 }, 00:17:57.927 { 00:17:57.927 "name": "BaseBdev3", 00:17:57.927 "uuid": "783dc1fd-ff3b-4651-b416-ce4ab88c7117", 00:17:57.927 "is_configured": true, 00:17:57.927 "data_offset": 0, 00:17:57.927 "data_size": 65536 00:17:57.927 }, 00:17:57.927 { 00:17:57.927 "name": "BaseBdev4", 00:17:57.927 "uuid": "aaa2d4d1-2973-453c-911a-30f129bebe76", 00:17:57.927 "is_configured": true, 00:17:57.927 "data_offset": 0, 00:17:57.927 "data_size": 65536 00:17:57.927 } 00:17:57.927 ] 00:17:57.927 } 00:17:57.927 } 00:17:57.927 }' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:57.927 BaseBdev2 00:17:57.927 BaseBdev3 00:17:57.927 BaseBdev4' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.927 16:32:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.927 16:32:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.927 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:58.206 [2024-11-05 16:32:11.044766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.206 16:32:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.206 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.206 "name": "Existed_Raid", 00:17:58.206 "uuid": "b3edd2a8-faf9-4dab-abd8-b54edff50afc", 00:17:58.206 "strip_size_kb": 64, 00:17:58.206 "state": "online", 00:17:58.206 "raid_level": "raid5f", 00:17:58.206 "superblock": false, 00:17:58.207 "num_base_bdevs": 4, 00:17:58.207 "num_base_bdevs_discovered": 3, 00:17:58.207 "num_base_bdevs_operational": 3, 00:17:58.207 "base_bdevs_list": [ 00:17:58.207 { 00:17:58.207 "name": null, 00:17:58.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.207 "is_configured": false, 00:17:58.207 "data_offset": 0, 00:17:58.207 "data_size": 65536 00:17:58.207 }, 00:17:58.207 { 00:17:58.207 "name": "BaseBdev2", 00:17:58.207 "uuid": "772e2a6e-75b4-4088-85b9-94d44c6c6a0e", 00:17:58.207 "is_configured": true, 00:17:58.207 "data_offset": 0, 00:17:58.207 "data_size": 65536 00:17:58.207 }, 00:17:58.207 { 00:17:58.207 "name": "BaseBdev3", 00:17:58.207 "uuid": "783dc1fd-ff3b-4651-b416-ce4ab88c7117", 00:17:58.207 "is_configured": true, 00:17:58.207 "data_offset": 0, 00:17:58.207 "data_size": 65536 00:17:58.207 }, 00:17:58.207 { 00:17:58.207 "name": "BaseBdev4", 00:17:58.207 "uuid": "aaa2d4d1-2973-453c-911a-30f129bebe76", 00:17:58.207 "is_configured": true, 00:17:58.207 "data_offset": 0, 00:17:58.207 "data_size": 65536 00:17:58.207 } 00:17:58.207 ] 00:17:58.207 }' 00:17:58.207 
16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.207 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.796 [2024-11-05 16:32:11.652722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:58.796 [2024-11-05 16:32:11.652940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.796 [2024-11-05 16:32:11.758409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.796 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.796 [2024-11-05 16:32:11.814345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.056 16:32:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 [2024-11-05 16:32:11.959959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:59.056 [2024-11-05 16:32:11.960078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.056 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 BaseBdev2 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 [ 00:17:59.316 { 00:17:59.316 "name": "BaseBdev2", 00:17:59.316 "aliases": [ 00:17:59.316 "cec13793-2fd0-440c-842a-23fa1e5ddf96" 00:17:59.316 ], 00:17:59.316 "product_name": "Malloc disk", 00:17:59.316 "block_size": 512, 00:17:59.316 "num_blocks": 65536, 00:17:59.316 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:17:59.316 "assigned_rate_limits": { 00:17:59.316 "rw_ios_per_sec": 0, 00:17:59.316 "rw_mbytes_per_sec": 0, 00:17:59.316 "r_mbytes_per_sec": 0, 00:17:59.316 "w_mbytes_per_sec": 0 00:17:59.316 }, 00:17:59.316 "claimed": false, 00:17:59.316 "zoned": false, 00:17:59.316 "supported_io_types": { 00:17:59.316 "read": true, 00:17:59.316 "write": true, 00:17:59.316 "unmap": true, 00:17:59.316 "flush": true, 00:17:59.316 "reset": true, 00:17:59.316 "nvme_admin": false, 00:17:59.316 "nvme_io": false, 00:17:59.316 "nvme_io_md": false, 00:17:59.316 "write_zeroes": true, 00:17:59.316 "zcopy": true, 00:17:59.316 "get_zone_info": false, 00:17:59.316 "zone_management": false, 00:17:59.316 "zone_append": false, 00:17:59.316 "compare": false, 00:17:59.316 "compare_and_write": false, 00:17:59.316 "abort": true, 00:17:59.316 "seek_hole": false, 00:17:59.316 "seek_data": false, 00:17:59.316 "copy": true, 00:17:59.316 "nvme_iov_md": false 00:17:59.316 }, 00:17:59.316 "memory_domains": [ 00:17:59.316 { 00:17:59.316 "dma_device_id": "system", 00:17:59.316 
"dma_device_type": 1 00:17:59.316 }, 00:17:59.316 { 00:17:59.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.316 "dma_device_type": 2 00:17:59.316 } 00:17:59.316 ], 00:17:59.316 "driver_specific": {} 00:17:59.316 } 00:17:59.316 ] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 BaseBdev3 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:59.316 16:32:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 [ 00:17:59.316 { 00:17:59.316 "name": "BaseBdev3", 00:17:59.316 "aliases": [ 00:17:59.316 "bdc0db46-f00f-4bd0-b3fa-329d344986cc" 00:17:59.316 ], 00:17:59.316 "product_name": "Malloc disk", 00:17:59.316 "block_size": 512, 00:17:59.316 "num_blocks": 65536, 00:17:59.316 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:17:59.316 "assigned_rate_limits": { 00:17:59.316 "rw_ios_per_sec": 0, 00:17:59.316 "rw_mbytes_per_sec": 0, 00:17:59.316 "r_mbytes_per_sec": 0, 00:17:59.316 "w_mbytes_per_sec": 0 00:17:59.316 }, 00:17:59.316 "claimed": false, 00:17:59.316 "zoned": false, 00:17:59.316 "supported_io_types": { 00:17:59.316 "read": true, 00:17:59.316 "write": true, 00:17:59.316 "unmap": true, 00:17:59.316 "flush": true, 00:17:59.316 "reset": true, 00:17:59.316 "nvme_admin": false, 00:17:59.316 "nvme_io": false, 00:17:59.316 "nvme_io_md": false, 00:17:59.316 "write_zeroes": true, 00:17:59.316 "zcopy": true, 00:17:59.316 "get_zone_info": false, 00:17:59.316 "zone_management": false, 00:17:59.316 "zone_append": false, 00:17:59.316 "compare": false, 00:17:59.316 "compare_and_write": false, 00:17:59.316 "abort": true, 00:17:59.316 "seek_hole": false, 00:17:59.316 "seek_data": false, 00:17:59.316 "copy": true, 00:17:59.316 "nvme_iov_md": false 00:17:59.316 }, 00:17:59.316 "memory_domains": [ 00:17:59.316 { 00:17:59.316 
"dma_device_id": "system", 00:17:59.316 "dma_device_type": 1 00:17:59.316 }, 00:17:59.316 { 00:17:59.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.316 "dma_device_type": 2 00:17:59.316 } 00:17:59.316 ], 00:17:59.316 "driver_specific": {} 00:17:59.316 } 00:17:59.316 ] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 BaseBdev4 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.316 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.317 [ 00:17:59.317 { 00:17:59.317 "name": "BaseBdev4", 00:17:59.317 "aliases": [ 00:17:59.317 "dbf89031-2468-41fd-a1de-c57db9815d97" 00:17:59.317 ], 00:17:59.317 "product_name": "Malloc disk", 00:17:59.317 "block_size": 512, 00:17:59.317 "num_blocks": 65536, 00:17:59.317 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:17:59.317 "assigned_rate_limits": { 00:17:59.317 "rw_ios_per_sec": 0, 00:17:59.317 "rw_mbytes_per_sec": 0, 00:17:59.317 "r_mbytes_per_sec": 0, 00:17:59.317 "w_mbytes_per_sec": 0 00:17:59.317 }, 00:17:59.317 "claimed": false, 00:17:59.317 "zoned": false, 00:17:59.317 "supported_io_types": { 00:17:59.317 "read": true, 00:17:59.317 "write": true, 00:17:59.317 "unmap": true, 00:17:59.317 "flush": true, 00:17:59.317 "reset": true, 00:17:59.317 "nvme_admin": false, 00:17:59.317 "nvme_io": false, 00:17:59.317 "nvme_io_md": false, 00:17:59.317 "write_zeroes": true, 00:17:59.317 "zcopy": true, 00:17:59.317 "get_zone_info": false, 00:17:59.317 "zone_management": false, 00:17:59.317 "zone_append": false, 00:17:59.317 "compare": false, 00:17:59.317 "compare_and_write": false, 00:17:59.317 "abort": true, 00:17:59.317 "seek_hole": false, 00:17:59.317 "seek_data": false, 00:17:59.317 "copy": true, 00:17:59.317 "nvme_iov_md": false 00:17:59.317 }, 00:17:59.317 "memory_domains": [ 
00:17:59.317 { 00:17:59.317 "dma_device_id": "system", 00:17:59.317 "dma_device_type": 1 00:17:59.317 }, 00:17:59.317 { 00:17:59.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.317 "dma_device_type": 2 00:17:59.317 } 00:17:59.317 ], 00:17:59.317 "driver_specific": {} 00:17:59.317 } 00:17:59.317 ] 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.317 [2024-11-05 16:32:12.391936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.317 [2024-11-05 16:32:12.392085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.317 [2024-11-05 16:32:12.392137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.317 [2024-11-05 16:32:12.394287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:59.317 [2024-11-05 16:32:12.394400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.317 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.576 "name": "Existed_Raid", 00:17:59.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.576 "strip_size_kb": 64, 00:17:59.576 "state": "configuring", 00:17:59.576 "raid_level": "raid5f", 00:17:59.576 
"superblock": false, 00:17:59.576 "num_base_bdevs": 4, 00:17:59.576 "num_base_bdevs_discovered": 3, 00:17:59.576 "num_base_bdevs_operational": 4, 00:17:59.576 "base_bdevs_list": [ 00:17:59.576 { 00:17:59.576 "name": "BaseBdev1", 00:17:59.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.576 "is_configured": false, 00:17:59.576 "data_offset": 0, 00:17:59.576 "data_size": 0 00:17:59.576 }, 00:17:59.576 { 00:17:59.576 "name": "BaseBdev2", 00:17:59.576 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:17:59.576 "is_configured": true, 00:17:59.576 "data_offset": 0, 00:17:59.576 "data_size": 65536 00:17:59.576 }, 00:17:59.576 { 00:17:59.576 "name": "BaseBdev3", 00:17:59.576 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:17:59.576 "is_configured": true, 00:17:59.576 "data_offset": 0, 00:17:59.576 "data_size": 65536 00:17:59.576 }, 00:17:59.576 { 00:17:59.576 "name": "BaseBdev4", 00:17:59.576 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:17:59.576 "is_configured": true, 00:17:59.576 "data_offset": 0, 00:17:59.576 "data_size": 65536 00:17:59.576 } 00:17:59.576 ] 00:17:59.576 }' 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.576 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.835 [2024-11-05 16:32:12.795573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.835 "name": "Existed_Raid", 00:17:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.835 "strip_size_kb": 64, 00:17:59.835 "state": "configuring", 00:17:59.835 "raid_level": "raid5f", 00:17:59.835 "superblock": false, 
00:17:59.835 "num_base_bdevs": 4, 00:17:59.835 "num_base_bdevs_discovered": 2, 00:17:59.835 "num_base_bdevs_operational": 4, 00:17:59.835 "base_bdevs_list": [ 00:17:59.835 { 00:17:59.835 "name": "BaseBdev1", 00:17:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.835 "is_configured": false, 00:17:59.835 "data_offset": 0, 00:17:59.835 "data_size": 0 00:17:59.835 }, 00:17:59.835 { 00:17:59.835 "name": null, 00:17:59.835 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:17:59.835 "is_configured": false, 00:17:59.835 "data_offset": 0, 00:17:59.835 "data_size": 65536 00:17:59.835 }, 00:17:59.835 { 00:17:59.835 "name": "BaseBdev3", 00:17:59.835 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:17:59.835 "is_configured": true, 00:17:59.835 "data_offset": 0, 00:17:59.835 "data_size": 65536 00:17:59.835 }, 00:17:59.835 { 00:17:59.835 "name": "BaseBdev4", 00:17:59.835 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:17:59.835 "is_configured": true, 00:17:59.835 "data_offset": 0, 00:17:59.835 "data_size": 65536 00:17:59.835 } 00:17:59.835 ] 00:17:59.835 }' 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.835 16:32:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:00.404 
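The `verify_raid_bdev_state` calls traced above boil down to fetching the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, selecting the entry by name with `jq`, and comparing individual fields against the expected values. A minimal sketch of that field-check pattern, run against a JSON snippet copied from the log output above rather than fetched from a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state field checks. The JSON below is the
# relevant subset of the bdev_raid_get_bdevs output seen in the log; in the
# real test it comes from rpc_cmd against a running target.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4
}'

# Extract each field the same way the test script does, then compare.
state=$(jq -r '.state' <<< "$raid_bdev_info")
level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
strip=$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")

[[ "$state" == "configuring" ]] || exit 1
[[ "$level" == "raid5f" ]] || exit 1
[[ "$strip" == "64" ]] || exit 1
echo "Existed_Raid state=$state level=$level strip_size_kb=$strip"
```

With BaseBdev2 removed, `num_base_bdevs_discovered` drops to 2 while `num_base_bdevs_operational` stays 4, which is why the expected state remains `configuring`.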
16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 [2024-11-05 16:32:13.356885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.404 BaseBdev1 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.404 
16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 [ 00:18:00.404 { 00:18:00.404 "name": "BaseBdev1", 00:18:00.404 "aliases": [ 00:18:00.404 "9614eda9-bbff-4626-818e-e491b58f6075" 00:18:00.404 ], 00:18:00.404 "product_name": "Malloc disk", 00:18:00.404 "block_size": 512, 00:18:00.404 "num_blocks": 65536, 00:18:00.404 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:00.404 "assigned_rate_limits": { 00:18:00.404 "rw_ios_per_sec": 0, 00:18:00.404 "rw_mbytes_per_sec": 0, 00:18:00.404 "r_mbytes_per_sec": 0, 00:18:00.404 "w_mbytes_per_sec": 0 00:18:00.404 }, 00:18:00.404 "claimed": true, 00:18:00.404 "claim_type": "exclusive_write", 00:18:00.404 "zoned": false, 00:18:00.404 "supported_io_types": { 00:18:00.404 "read": true, 00:18:00.404 "write": true, 00:18:00.404 "unmap": true, 00:18:00.404 "flush": true, 00:18:00.404 "reset": true, 00:18:00.404 "nvme_admin": false, 00:18:00.404 "nvme_io": false, 00:18:00.404 "nvme_io_md": false, 00:18:00.404 "write_zeroes": true, 00:18:00.404 "zcopy": true, 00:18:00.404 "get_zone_info": false, 00:18:00.404 "zone_management": false, 00:18:00.404 "zone_append": false, 00:18:00.404 "compare": false, 00:18:00.404 "compare_and_write": false, 00:18:00.404 "abort": true, 00:18:00.404 "seek_hole": false, 00:18:00.404 "seek_data": false, 00:18:00.404 "copy": true, 00:18:00.404 "nvme_iov_md": false 00:18:00.404 }, 00:18:00.404 "memory_domains": [ 00:18:00.404 { 00:18:00.404 "dma_device_id": "system", 00:18:00.404 "dma_device_type": 1 00:18:00.404 }, 00:18:00.404 { 00:18:00.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.404 "dma_device_type": 2 00:18:00.404 } 00:18:00.404 ], 00:18:00.404 "driver_specific": {} 00:18:00.404 } 00:18:00.404 ] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:00.404 16:32:13 
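The `waitforbdev BaseBdev1` sequence above polls `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` until the newly created malloc bdev is visible or the timeout expires. A self-contained sketch of that poll-with-timeout pattern; `bdev_exists` here is a hypothetical file-based stand-in for the real RPC probe:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev pattern: poll until the bdev shows up or a
# timeout elapses. bdev_exists is a stub standing in for
# "rpc.py bdev_get_bdevs -b <name>" against a live SPDK target.
bdev_exists() { [[ -e "/tmp/fake_bdev_$1" ]]; }  # hypothetical stub

waitforbdev() {
    local bdev_name=$1 timeout_ms=${2:-2000} waited=0
    while ! bdev_exists "$bdev_name"; do
        (( waited >= timeout_ms )) && return 1  # gave up: bdev never appeared
        sleep 0.1
        (( waited += 100 ))
    done
    return 0
}

touch /tmp/fake_bdev_BaseBdev1          # simulate bdev_malloc_create succeeding
waitforbdev BaseBdev1 && echo "BaseBdev1 ready"
rm -f /tmp/fake_bdev_BaseBdev1
```

In the real test the subsequent `bdev_get_bdevs -b BaseBdev1 -t 2000` dump (the JSON descriptor above) doubles as confirmation that the malloc bdev was claimed with `"claim_type": "exclusive_write"` by the raid.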
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.404 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.404 "name": "Existed_Raid", 00:18:00.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.404 "strip_size_kb": 64, 00:18:00.404 "state": 
"configuring", 00:18:00.404 "raid_level": "raid5f", 00:18:00.404 "superblock": false, 00:18:00.404 "num_base_bdevs": 4, 00:18:00.404 "num_base_bdevs_discovered": 3, 00:18:00.404 "num_base_bdevs_operational": 4, 00:18:00.404 "base_bdevs_list": [ 00:18:00.404 { 00:18:00.404 "name": "BaseBdev1", 00:18:00.404 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:00.404 "is_configured": true, 00:18:00.404 "data_offset": 0, 00:18:00.404 "data_size": 65536 00:18:00.404 }, 00:18:00.404 { 00:18:00.404 "name": null, 00:18:00.404 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:00.404 "is_configured": false, 00:18:00.404 "data_offset": 0, 00:18:00.404 "data_size": 65536 00:18:00.404 }, 00:18:00.404 { 00:18:00.404 "name": "BaseBdev3", 00:18:00.404 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:00.404 "is_configured": true, 00:18:00.404 "data_offset": 0, 00:18:00.404 "data_size": 65536 00:18:00.404 }, 00:18:00.404 { 00:18:00.404 "name": "BaseBdev4", 00:18:00.404 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:00.404 "is_configured": true, 00:18:00.404 "data_offset": 0, 00:18:00.404 "data_size": 65536 00:18:00.405 } 00:18:00.405 ] 00:18:00.405 }' 00:18:00.405 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.405 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.972 16:32:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.972 [2024-11-05 16:32:13.888336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.972 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.973 16:32:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.973 "name": "Existed_Raid", 00:18:00.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.973 "strip_size_kb": 64, 00:18:00.973 "state": "configuring", 00:18:00.973 "raid_level": "raid5f", 00:18:00.973 "superblock": false, 00:18:00.973 "num_base_bdevs": 4, 00:18:00.973 "num_base_bdevs_discovered": 2, 00:18:00.973 "num_base_bdevs_operational": 4, 00:18:00.973 "base_bdevs_list": [ 00:18:00.973 { 00:18:00.973 "name": "BaseBdev1", 00:18:00.973 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:00.973 "is_configured": true, 00:18:00.973 "data_offset": 0, 00:18:00.973 "data_size": 65536 00:18:00.973 }, 00:18:00.973 { 00:18:00.973 "name": null, 00:18:00.973 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:00.973 "is_configured": false, 00:18:00.973 "data_offset": 0, 00:18:00.973 "data_size": 65536 00:18:00.973 }, 00:18:00.973 { 00:18:00.973 "name": null, 00:18:00.973 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:00.973 "is_configured": false, 00:18:00.973 "data_offset": 0, 00:18:00.973 "data_size": 65536 00:18:00.973 }, 00:18:00.973 { 00:18:00.973 "name": "BaseBdev4", 00:18:00.973 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:00.973 "is_configured": true, 00:18:00.973 "data_offset": 0, 00:18:00.973 "data_size": 65536 00:18:00.973 } 00:18:00.973 ] 00:18:00.973 }' 00:18:00.973 16:32:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.973 16:32:13 
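The dump above, taken after `bdev_raid_remove_base_bdev BaseBdev3`, shows that removing a base bdev from a configuring raid keeps the slot: `name` becomes `null` and `is_configured` flips to `false`, but the `uuid` and slot position survive so the device can be matched back later. A jq probe over that structure (JSON copied from the log, not fetched live):

```shell
#!/usr/bin/env bash
# Removed base bdevs keep their slot and uuid; only the name is nulled and
# is_configured cleared. This list mirrors the base_bdevs_list in the log
# after BaseBdev2 and BaseBdev3 were removed.
base_bdevs='[
  {"name": null,        "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", "is_configured": false},
  {"name": null,        "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", "is_configured": false},
  {"name": "BaseBdev4", "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", "is_configured": true}
]'

# Print each slot, then count configured slots the way the test's
# per-index jq checks ([0].is_configured etc.) effectively do.
jq -r '.[] | "\(.uuid) configured=\(.is_configured)"' <<< "$base_bdevs"
configured=$(jq '[.[] | select(.is_configured)] | length' <<< "$base_bdevs")
echo "configured_slots=$configured"
```

Counting only configured slots is what feeds `num_base_bdevs_discovered`, which is why it reads 2 in the verification above.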
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.231 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.231 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:01.231 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.231 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.490 [2024-11-05 16:32:14.363553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.490 
16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.490 "name": "Existed_Raid", 00:18:01.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.490 "strip_size_kb": 64, 00:18:01.490 "state": "configuring", 00:18:01.490 "raid_level": "raid5f", 00:18:01.490 "superblock": false, 00:18:01.490 "num_base_bdevs": 4, 00:18:01.490 "num_base_bdevs_discovered": 3, 00:18:01.490 "num_base_bdevs_operational": 4, 00:18:01.490 "base_bdevs_list": [ 00:18:01.490 { 00:18:01.490 "name": "BaseBdev1", 00:18:01.490 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:01.490 "is_configured": true, 00:18:01.490 "data_offset": 0, 00:18:01.490 "data_size": 65536 00:18:01.490 }, 00:18:01.490 { 00:18:01.490 "name": null, 00:18:01.490 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:01.490 "is_configured": 
false, 00:18:01.490 "data_offset": 0, 00:18:01.490 "data_size": 65536 00:18:01.490 }, 00:18:01.490 { 00:18:01.490 "name": "BaseBdev3", 00:18:01.490 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:01.490 "is_configured": true, 00:18:01.490 "data_offset": 0, 00:18:01.490 "data_size": 65536 00:18:01.490 }, 00:18:01.490 { 00:18:01.490 "name": "BaseBdev4", 00:18:01.490 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:01.490 "is_configured": true, 00:18:01.490 "data_offset": 0, 00:18:01.490 "data_size": 65536 00:18:01.490 } 00:18:01.490 ] 00:18:01.490 }' 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.490 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.749 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.749 [2024-11-05 16:32:14.810815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- 
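The add/remove cycle traced here (re-adding BaseBdev3 via `bdev_raid_add_base_bdev`, then deleting BaseBdev1's backing malloc with `bdev_malloc_delete`, which cascades into `_raid_bdev_remove_base_bdev`) exercises the `num_base_bdevs_discovered` bookkeeping. A toy model of that bookkeeping, purely illustrative and not SPDK's actual C implementation:

```shell
#!/usr/bin/env bash
# Toy model of the discovered/operational counters driving the
# configuring -> online transition this test exercises. Assumption: this
# is an illustration of the observable behavior, not SPDK internals.
num_base_bdevs_operational=4
num_base_bdevs_discovered=3

raid_state() {
    if (( num_base_bdevs_discovered == num_base_bdevs_operational )); then
        echo online
    else
        echo configuring
    fi
}

echo "before: $(raid_state)"          # 3 of 4 slots configured
(( num_base_bdevs_discovered++ ))     # bdev_raid_add_base_bdev fills a slot
echo "after:  $(raid_state)"
```

This is why every verification in this section still expects `configuring`: at least one of the four slots is unconfigured until the final base bdev is matched in.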
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.007 "name": "Existed_Raid", 00:18:02.007 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:02.007 "strip_size_kb": 64, 00:18:02.007 "state": "configuring", 00:18:02.007 "raid_level": "raid5f", 00:18:02.007 "superblock": false, 00:18:02.007 "num_base_bdevs": 4, 00:18:02.007 "num_base_bdevs_discovered": 2, 00:18:02.007 "num_base_bdevs_operational": 4, 00:18:02.007 "base_bdevs_list": [ 00:18:02.007 { 00:18:02.007 "name": null, 00:18:02.007 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:02.007 "is_configured": false, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 65536 00:18:02.007 }, 00:18:02.007 { 00:18:02.007 "name": null, 00:18:02.007 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:02.007 "is_configured": false, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 65536 00:18:02.007 }, 00:18:02.007 { 00:18:02.007 "name": "BaseBdev3", 00:18:02.007 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:02.007 "is_configured": true, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 65536 00:18:02.007 }, 00:18:02.007 { 00:18:02.007 "name": "BaseBdev4", 00:18:02.007 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:02.007 "is_configured": true, 00:18:02.007 "data_offset": 0, 00:18:02.007 "data_size": 65536 00:18:02.007 } 00:18:02.007 ] 00:18:02.007 }' 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.007 16:32:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.574 [2024-11-05 16:32:15.419324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.574 "name": "Existed_Raid", 00:18:02.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.574 "strip_size_kb": 64, 00:18:02.574 "state": "configuring", 00:18:02.574 "raid_level": "raid5f", 00:18:02.574 "superblock": false, 00:18:02.574 "num_base_bdevs": 4, 00:18:02.574 "num_base_bdevs_discovered": 3, 00:18:02.574 "num_base_bdevs_operational": 4, 00:18:02.574 "base_bdevs_list": [ 00:18:02.574 { 00:18:02.574 "name": null, 00:18:02.574 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:02.574 "is_configured": false, 00:18:02.574 "data_offset": 0, 00:18:02.574 "data_size": 65536 00:18:02.574 }, 00:18:02.574 { 00:18:02.574 "name": "BaseBdev2", 00:18:02.574 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:02.574 "is_configured": true, 00:18:02.574 "data_offset": 0, 00:18:02.574 "data_size": 65536 00:18:02.574 }, 00:18:02.574 { 00:18:02.574 "name": "BaseBdev3", 00:18:02.574 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:02.574 "is_configured": true, 00:18:02.574 "data_offset": 0, 00:18:02.574 "data_size": 65536 00:18:02.574 }, 00:18:02.574 { 00:18:02.574 "name": "BaseBdev4", 00:18:02.574 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:02.574 "is_configured": true, 00:18:02.574 "data_offset": 0, 00:18:02.574 "data_size": 65536 00:18:02.574 } 00:18:02.574 ] 00:18:02.574 }' 00:18:02.574 16:32:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.574 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.834 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9614eda9-bbff-4626-818e-e491b58f6075 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 [2024-11-05 16:32:15.988601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:03.093 [2024-11-05 
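The step traced next reads back the vacated slot's uuid with `jq -r '.[0].base_bdevs_list[0].uuid'` and recreates the malloc bdev under a new name but the same uuid (`bdev_malloc_create 32 512 -b NewBaseBdev -u 9614eda9-...`), so the raid can match it to the empty slot by uuid. A sketch of that extraction, using a JSON fragment copied from the log rather than a live query:

```shell
#!/usr/bin/env bash
# Recreate-by-uuid: pull the uuid of the unconfigured slot (left behind by
# the deleted BaseBdev1) and build the malloc-create command that will be
# matched back into that slot. JSON copied from the log output.
base_bdevs='[{"name": null, "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", "is_configured": false}]'
uuid=$(jq -r '.[0].uuid' <<< "$base_bdevs")
echo "rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u $uuid"
```

Because the uuid matches, SPDK claims NewBaseBdev into the old BaseBdev1 slot, `num_base_bdevs_discovered` reaches 4, and the raid transitions to `online`, which the final `verify_raid_bdev_state Existed_Raid online raid5f 64 4` call checks.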
16:32:15.988678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.093 [2024-11-05 16:32:15.988688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:03.093 [2024-11-05 16:32:15.988991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:03.093 [2024-11-05 16:32:15.995914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.093 [2024-11-05 16:32:15.995948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:03.093 [2024-11-05 16:32:15.996245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.093 NewBaseBdev 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.093 16:32:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 [ 00:18:03.093 { 00:18:03.093 "name": "NewBaseBdev", 00:18:03.093 "aliases": [ 00:18:03.093 "9614eda9-bbff-4626-818e-e491b58f6075" 00:18:03.093 ], 00:18:03.093 "product_name": "Malloc disk", 00:18:03.093 "block_size": 512, 00:18:03.093 "num_blocks": 65536, 00:18:03.093 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:03.093 "assigned_rate_limits": { 00:18:03.093 "rw_ios_per_sec": 0, 00:18:03.093 "rw_mbytes_per_sec": 0, 00:18:03.093 "r_mbytes_per_sec": 0, 00:18:03.093 "w_mbytes_per_sec": 0 00:18:03.093 }, 00:18:03.093 "claimed": true, 00:18:03.093 "claim_type": "exclusive_write", 00:18:03.093 "zoned": false, 00:18:03.093 "supported_io_types": { 00:18:03.093 "read": true, 00:18:03.093 "write": true, 00:18:03.093 "unmap": true, 00:18:03.093 "flush": true, 00:18:03.093 "reset": true, 00:18:03.093 "nvme_admin": false, 00:18:03.093 "nvme_io": false, 00:18:03.093 "nvme_io_md": false, 00:18:03.093 "write_zeroes": true, 00:18:03.093 "zcopy": true, 00:18:03.093 "get_zone_info": false, 00:18:03.093 "zone_management": false, 00:18:03.093 "zone_append": false, 00:18:03.093 "compare": false, 00:18:03.093 "compare_and_write": false, 00:18:03.093 "abort": true, 00:18:03.093 "seek_hole": false, 00:18:03.093 "seek_data": false, 00:18:03.093 "copy": true, 00:18:03.093 "nvme_iov_md": false 00:18:03.093 }, 00:18:03.093 "memory_domains": [ 00:18:03.093 { 00:18:03.093 "dma_device_id": "system", 00:18:03.093 "dma_device_type": 1 00:18:03.093 }, 00:18:03.093 { 00:18:03.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.093 "dma_device_type": 2 00:18:03.093 } 
00:18:03.093 ], 00:18:03.093 "driver_specific": {} 00:18:03.093 } 00:18:03.093 ] 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.093 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.093 "name": "Existed_Raid", 00:18:03.093 "uuid": "74d51293-f663-4848-ad54-8ba481e49721", 00:18:03.093 "strip_size_kb": 64, 00:18:03.093 "state": "online", 00:18:03.093 "raid_level": "raid5f", 00:18:03.093 "superblock": false, 00:18:03.093 "num_base_bdevs": 4, 00:18:03.093 "num_base_bdevs_discovered": 4, 00:18:03.093 "num_base_bdevs_operational": 4, 00:18:03.093 "base_bdevs_list": [ 00:18:03.093 { 00:18:03.093 "name": "NewBaseBdev", 00:18:03.093 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:03.093 "is_configured": true, 00:18:03.093 "data_offset": 0, 00:18:03.093 "data_size": 65536 00:18:03.093 }, 00:18:03.093 { 00:18:03.093 "name": "BaseBdev2", 00:18:03.093 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:03.093 "is_configured": true, 00:18:03.093 "data_offset": 0, 00:18:03.093 "data_size": 65536 00:18:03.093 }, 00:18:03.093 { 00:18:03.093 "name": "BaseBdev3", 00:18:03.093 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:03.093 "is_configured": true, 00:18:03.093 "data_offset": 0, 00:18:03.093 "data_size": 65536 00:18:03.093 }, 00:18:03.093 { 00:18:03.093 "name": "BaseBdev4", 00:18:03.093 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:03.093 "is_configured": true, 00:18:03.093 "data_offset": 0, 00:18:03.094 "data_size": 65536 00:18:03.094 } 00:18:03.094 ] 00:18:03.094 }' 00:18:03.094 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.094 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.662 [2024-11-05 16:32:16.501495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.662 "name": "Existed_Raid", 00:18:03.662 "aliases": [ 00:18:03.662 "74d51293-f663-4848-ad54-8ba481e49721" 00:18:03.662 ], 00:18:03.662 "product_name": "Raid Volume", 00:18:03.662 "block_size": 512, 00:18:03.662 "num_blocks": 196608, 00:18:03.662 "uuid": "74d51293-f663-4848-ad54-8ba481e49721", 00:18:03.662 "assigned_rate_limits": { 00:18:03.662 "rw_ios_per_sec": 0, 00:18:03.662 "rw_mbytes_per_sec": 0, 00:18:03.662 "r_mbytes_per_sec": 0, 00:18:03.662 "w_mbytes_per_sec": 0 00:18:03.662 }, 00:18:03.662 "claimed": false, 00:18:03.662 "zoned": false, 00:18:03.662 "supported_io_types": { 00:18:03.662 "read": true, 00:18:03.662 "write": true, 00:18:03.662 "unmap": false, 00:18:03.662 "flush": false, 00:18:03.662 "reset": true, 00:18:03.662 "nvme_admin": false, 00:18:03.662 "nvme_io": false, 00:18:03.662 "nvme_io_md": 
false, 00:18:03.662 "write_zeroes": true, 00:18:03.662 "zcopy": false, 00:18:03.662 "get_zone_info": false, 00:18:03.662 "zone_management": false, 00:18:03.662 "zone_append": false, 00:18:03.662 "compare": false, 00:18:03.662 "compare_and_write": false, 00:18:03.662 "abort": false, 00:18:03.662 "seek_hole": false, 00:18:03.662 "seek_data": false, 00:18:03.662 "copy": false, 00:18:03.662 "nvme_iov_md": false 00:18:03.662 }, 00:18:03.662 "driver_specific": { 00:18:03.662 "raid": { 00:18:03.662 "uuid": "74d51293-f663-4848-ad54-8ba481e49721", 00:18:03.662 "strip_size_kb": 64, 00:18:03.662 "state": "online", 00:18:03.662 "raid_level": "raid5f", 00:18:03.662 "superblock": false, 00:18:03.662 "num_base_bdevs": 4, 00:18:03.662 "num_base_bdevs_discovered": 4, 00:18:03.662 "num_base_bdevs_operational": 4, 00:18:03.662 "base_bdevs_list": [ 00:18:03.662 { 00:18:03.662 "name": "NewBaseBdev", 00:18:03.662 "uuid": "9614eda9-bbff-4626-818e-e491b58f6075", 00:18:03.662 "is_configured": true, 00:18:03.662 "data_offset": 0, 00:18:03.662 "data_size": 65536 00:18:03.662 }, 00:18:03.662 { 00:18:03.662 "name": "BaseBdev2", 00:18:03.662 "uuid": "cec13793-2fd0-440c-842a-23fa1e5ddf96", 00:18:03.662 "is_configured": true, 00:18:03.662 "data_offset": 0, 00:18:03.662 "data_size": 65536 00:18:03.662 }, 00:18:03.662 { 00:18:03.662 "name": "BaseBdev3", 00:18:03.662 "uuid": "bdc0db46-f00f-4bd0-b3fa-329d344986cc", 00:18:03.662 "is_configured": true, 00:18:03.662 "data_offset": 0, 00:18:03.662 "data_size": 65536 00:18:03.662 }, 00:18:03.662 { 00:18:03.662 "name": "BaseBdev4", 00:18:03.662 "uuid": "dbf89031-2468-41fd-a1de-c57db9815d97", 00:18:03.662 "is_configured": true, 00:18:03.662 "data_offset": 0, 00:18:03.662 "data_size": 65536 00:18:03.662 } 00:18:03.662 ] 00:18:03.662 } 00:18:03.662 } 00:18:03.662 }' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.662 16:32:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:03.662 BaseBdev2 00:18:03.662 BaseBdev3 00:18:03.662 BaseBdev4' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.662 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.921 [2024-11-05 16:32:16.804658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.921 [2024-11-05 16:32:16.804780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.921 [2024-11-05 16:32:16.804887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.921 [2024-11-05 16:32:16.805231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.921 [2024-11-05 16:32:16.805245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83127 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83127 ']' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83127 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.921 16:32:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83127 00:18:03.921 killing process with pid 83127 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83127' 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83127 00:18:03.921 [2024-11-05 16:32:16.840821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.921 16:32:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83127 00:18:04.488 [2024-11-05 16:32:17.294267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.876 ************************************ 00:18:05.876 END TEST raid5f_state_function_test 00:18:05.876 ************************************ 00:18:05.876 16:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:05.876 00:18:05.876 real 0m11.846s 00:18:05.876 user 0m18.440s 00:18:05.876 sys 0m2.211s 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 16:32:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:05.877 16:32:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:05.877 16:32:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.877 16:32:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 ************************************ 00:18:05.877 START TEST 
raid5f_state_function_test_sb 00:18:05.877 ************************************ 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:05.877 
16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:05.877 Process raid pid: 83801 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83801 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83801' 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83801 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # 
'[' -z 83801 ']' 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.877 16:32:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 [2024-11-05 16:32:18.725265] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:18:05.877 [2024-11-05 16:32:18.725482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.877 [2024-11-05 16:32:18.903720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.153 [2024-11-05 16:32:19.050727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.412 [2024-11-05 16:32:19.272734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.412 [2024-11-05 16:32:19.272884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.672 [2024-11-05 16:32:19.574088] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.672 [2024-11-05 16:32:19.574146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.672 [2024-11-05 16:32:19.574156] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.672 [2024-11-05 16:32:19.574166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.672 [2024-11-05 16:32:19.574176] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:06.672 [2024-11-05 16:32:19.574186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:06.672 [2024-11-05 16:32:19.574192] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:06.672 [2024-11-05 16:32:19.574200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.672 "name": "Existed_Raid", 00:18:06.672 "uuid": "d51440b9-17ea-424b-a16b-51252ca18e16", 00:18:06.672 "strip_size_kb": 64, 00:18:06.672 "state": "configuring", 00:18:06.672 "raid_level": "raid5f", 00:18:06.672 "superblock": true, 00:18:06.672 "num_base_bdevs": 4, 00:18:06.672 "num_base_bdevs_discovered": 0, 00:18:06.672 "num_base_bdevs_operational": 4, 00:18:06.672 "base_bdevs_list": [ 00:18:06.672 { 00:18:06.672 "name": "BaseBdev1", 00:18:06.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.672 "is_configured": false, 00:18:06.672 "data_offset": 0, 00:18:06.672 "data_size": 0 00:18:06.672 }, 00:18:06.672 { 00:18:06.672 "name": "BaseBdev2", 00:18:06.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.672 "is_configured": false, 00:18:06.672 "data_offset": 0, 00:18:06.672 "data_size": 0 00:18:06.672 }, 00:18:06.672 { 00:18:06.672 "name": "BaseBdev3", 00:18:06.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.672 "is_configured": false, 00:18:06.672 "data_offset": 0, 00:18:06.672 "data_size": 0 00:18:06.672 }, 00:18:06.672 { 00:18:06.672 "name": "BaseBdev4", 00:18:06.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.672 "is_configured": false, 00:18:06.672 "data_offset": 0, 00:18:06.672 "data_size": 0 00:18:06.672 } 00:18:06.672 ] 00:18:06.672 }' 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.672 16:32:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:06.931 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.191 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.191 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.191 [2024-11-05 16:32:20.029301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.192 [2024-11-05 16:32:20.029414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.192 [2024-11-05 16:32:20.041287] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.192 [2024-11-05 16:32:20.041381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.192 [2024-11-05 16:32:20.041421] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.192 [2024-11-05 16:32:20.041451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.192 [2024-11-05 16:32:20.041474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.192 [2024-11-05 16:32:20.041500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.192 [2024-11-05 16:32:20.041565] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.192 [2024-11-05 16:32:20.041650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.192 [2024-11-05 16:32:20.092930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.192 BaseBdev1 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.192 [ 00:18:07.192 { 00:18:07.192 "name": "BaseBdev1", 00:18:07.192 "aliases": [ 00:18:07.192 "f54564ff-efb8-46f7-915e-592a3cdf57a4" 00:18:07.192 ], 00:18:07.192 "product_name": "Malloc disk", 00:18:07.192 "block_size": 512, 00:18:07.192 "num_blocks": 65536, 00:18:07.192 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:07.192 "assigned_rate_limits": { 00:18:07.192 "rw_ios_per_sec": 0, 00:18:07.192 "rw_mbytes_per_sec": 0, 00:18:07.192 "r_mbytes_per_sec": 0, 00:18:07.192 "w_mbytes_per_sec": 0 00:18:07.192 }, 00:18:07.192 "claimed": true, 00:18:07.192 "claim_type": "exclusive_write", 00:18:07.192 "zoned": false, 00:18:07.192 "supported_io_types": { 00:18:07.192 "read": true, 00:18:07.192 "write": true, 00:18:07.192 "unmap": true, 00:18:07.192 "flush": true, 00:18:07.192 "reset": true, 00:18:07.192 "nvme_admin": false, 00:18:07.192 "nvme_io": false, 00:18:07.192 "nvme_io_md": false, 00:18:07.192 "write_zeroes": true, 00:18:07.192 "zcopy": true, 00:18:07.192 "get_zone_info": false, 00:18:07.192 "zone_management": false, 00:18:07.192 "zone_append": false, 00:18:07.192 "compare": false, 00:18:07.192 "compare_and_write": false, 00:18:07.192 "abort": true, 00:18:07.192 "seek_hole": false, 00:18:07.192 "seek_data": false, 00:18:07.192 "copy": true, 00:18:07.192 "nvme_iov_md": false 00:18:07.192 }, 00:18:07.192 "memory_domains": [ 00:18:07.192 { 00:18:07.192 "dma_device_id": "system", 00:18:07.192 "dma_device_type": 1 00:18:07.192 }, 00:18:07.192 { 00:18:07.192 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:07.192 "dma_device_type": 2 00:18:07.192 } 00:18:07.192 ], 00:18:07.192 "driver_specific": {} 00:18:07.192 } 00:18:07.192 ] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.192 16:32:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.192 "name": "Existed_Raid", 00:18:07.192 "uuid": "79e69509-8ba3-43a5-adc6-d18e8f034f44", 00:18:07.192 "strip_size_kb": 64, 00:18:07.192 "state": "configuring", 00:18:07.192 "raid_level": "raid5f", 00:18:07.192 "superblock": true, 00:18:07.192 "num_base_bdevs": 4, 00:18:07.192 "num_base_bdevs_discovered": 1, 00:18:07.192 "num_base_bdevs_operational": 4, 00:18:07.192 "base_bdevs_list": [ 00:18:07.192 { 00:18:07.192 "name": "BaseBdev1", 00:18:07.192 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:07.192 "is_configured": true, 00:18:07.192 "data_offset": 2048, 00:18:07.192 "data_size": 63488 00:18:07.192 }, 00:18:07.192 { 00:18:07.192 "name": "BaseBdev2", 00:18:07.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.192 "is_configured": false, 00:18:07.192 "data_offset": 0, 00:18:07.192 "data_size": 0 00:18:07.192 }, 00:18:07.192 { 00:18:07.192 "name": "BaseBdev3", 00:18:07.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.192 "is_configured": false, 00:18:07.192 "data_offset": 0, 00:18:07.192 "data_size": 0 00:18:07.192 }, 00:18:07.192 { 00:18:07.192 "name": "BaseBdev4", 00:18:07.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.192 "is_configured": false, 00:18:07.192 "data_offset": 0, 00:18:07.192 "data_size": 0 00:18:07.192 } 00:18:07.192 ] 00:18:07.192 }' 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.192 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:07.452 16:32:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 [2024-11-05 16:32:20.508356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.452 [2024-11-05 16:32:20.508415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 [2024-11-05 16:32:20.520399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.452 [2024-11-05 16:32:20.522532] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.452 [2024-11-05 16:32:20.522576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.452 [2024-11-05 16:32:20.522587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.452 [2024-11-05 16:32:20.522599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.452 [2024-11-05 16:32:20.522606] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.452 [2024-11-05 16:32:20.522616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.711 16:32:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.711 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.711 "name": "Existed_Raid", 00:18:07.711 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:07.711 "strip_size_kb": 64, 00:18:07.711 "state": "configuring", 00:18:07.711 "raid_level": "raid5f", 00:18:07.711 "superblock": true, 00:18:07.711 "num_base_bdevs": 4, 00:18:07.711 "num_base_bdevs_discovered": 1, 00:18:07.711 "num_base_bdevs_operational": 4, 00:18:07.711 "base_bdevs_list": [ 00:18:07.711 { 00:18:07.711 "name": "BaseBdev1", 00:18:07.711 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:07.711 "is_configured": true, 00:18:07.711 "data_offset": 2048, 00:18:07.711 "data_size": 63488 00:18:07.711 }, 00:18:07.711 { 00:18:07.711 "name": "BaseBdev2", 00:18:07.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.711 "is_configured": false, 00:18:07.711 "data_offset": 0, 00:18:07.711 "data_size": 0 00:18:07.711 }, 00:18:07.711 { 00:18:07.711 "name": "BaseBdev3", 00:18:07.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.711 "is_configured": false, 00:18:07.711 "data_offset": 0, 00:18:07.711 "data_size": 0 00:18:07.711 }, 00:18:07.711 { 00:18:07.711 "name": "BaseBdev4", 00:18:07.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.711 "is_configured": false, 00:18:07.711 "data_offset": 0, 00:18:07.711 "data_size": 0 00:18:07.711 } 00:18:07.711 ] 00:18:07.711 }' 00:18:07.711 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.711 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
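The `verify_raid_bdev_state` helper traced above runs `rpc_cmd bdev_raid_get_bdevs all`, picks out one entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the reported state and counters against the expected values. A minimal Python sketch of that same check, fed an abridged copy of the JSON dumped in this log (field names are taken from the log; the function itself is a hypothetical stand-in, not SPDK code):

```python
import json

# Stand-in for bdev_raid.sh's verify_raid_bdev_state: select the raid bdev
# by name from the `bdev_raid_get_bdevs all` output, then compare the
# fields the shell helper checks (state, level, strip size, bdev counts).
def verify_raid_bdev_state(rpc_output, name, expected_state,
                           raid_level, strip_size_kb, num_operational):
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Abridged sample of the dump shown above: one base bdev discovered so far.
sample = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4,
}])

info = verify_raid_bdev_state(sample, "Existed_Raid", "configuring",
                              "raid5f", 64, 4)
print(info["num_base_bdevs_discovered"])  # → 1
```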
00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.969 BaseBdev2 00:18:07.969 [2024-11-05 16:32:20.982101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.969 16:32:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.969 [ 00:18:07.969 { 00:18:07.969 "name": "BaseBdev2", 00:18:07.969 "aliases": [ 00:18:07.969 
"da130d19-a571-4f7e-8a3f-612148ff702f" 00:18:07.969 ], 00:18:07.969 "product_name": "Malloc disk", 00:18:07.969 "block_size": 512, 00:18:07.969 "num_blocks": 65536, 00:18:07.969 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:07.969 "assigned_rate_limits": { 00:18:07.969 "rw_ios_per_sec": 0, 00:18:07.969 "rw_mbytes_per_sec": 0, 00:18:07.969 "r_mbytes_per_sec": 0, 00:18:07.969 "w_mbytes_per_sec": 0 00:18:07.969 }, 00:18:07.969 "claimed": true, 00:18:07.969 "claim_type": "exclusive_write", 00:18:07.969 "zoned": false, 00:18:07.969 "supported_io_types": { 00:18:07.969 "read": true, 00:18:07.969 "write": true, 00:18:07.969 "unmap": true, 00:18:07.969 "flush": true, 00:18:07.969 "reset": true, 00:18:07.969 "nvme_admin": false, 00:18:07.969 "nvme_io": false, 00:18:07.969 "nvme_io_md": false, 00:18:07.969 "write_zeroes": true, 00:18:07.969 "zcopy": true, 00:18:07.969 "get_zone_info": false, 00:18:07.969 "zone_management": false, 00:18:07.969 "zone_append": false, 00:18:07.969 "compare": false, 00:18:07.969 "compare_and_write": false, 00:18:07.969 "abort": true, 00:18:07.969 "seek_hole": false, 00:18:07.969 "seek_data": false, 00:18:07.969 "copy": true, 00:18:07.969 "nvme_iov_md": false 00:18:07.969 }, 00:18:07.969 "memory_domains": [ 00:18:07.969 { 00:18:07.969 "dma_device_id": "system", 00:18:07.969 "dma_device_type": 1 00:18:07.969 }, 00:18:07.969 { 00:18:07.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.969 "dma_device_type": 2 00:18:07.969 } 00:18:07.969 ], 00:18:07.969 "driver_specific": {} 00:18:07.969 } 00:18:07.969 ] 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.969 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.228 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.228 "name": "Existed_Raid", 00:18:08.228 "uuid": 
"a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:08.228 "strip_size_kb": 64, 00:18:08.228 "state": "configuring", 00:18:08.228 "raid_level": "raid5f", 00:18:08.228 "superblock": true, 00:18:08.228 "num_base_bdevs": 4, 00:18:08.228 "num_base_bdevs_discovered": 2, 00:18:08.228 "num_base_bdevs_operational": 4, 00:18:08.228 "base_bdevs_list": [ 00:18:08.228 { 00:18:08.228 "name": "BaseBdev1", 00:18:08.228 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:08.228 "is_configured": true, 00:18:08.228 "data_offset": 2048, 00:18:08.228 "data_size": 63488 00:18:08.228 }, 00:18:08.228 { 00:18:08.228 "name": "BaseBdev2", 00:18:08.228 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:08.228 "is_configured": true, 00:18:08.228 "data_offset": 2048, 00:18:08.228 "data_size": 63488 00:18:08.228 }, 00:18:08.228 { 00:18:08.228 "name": "BaseBdev3", 00:18:08.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.228 "is_configured": false, 00:18:08.228 "data_offset": 0, 00:18:08.228 "data_size": 0 00:18:08.228 }, 00:18:08.228 { 00:18:08.228 "name": "BaseBdev4", 00:18:08.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.228 "is_configured": false, 00:18:08.228 "data_offset": 0, 00:18:08.228 "data_size": 0 00:18:08.228 } 00:18:08.228 ] 00:18:08.228 }' 00:18:08.228 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.228 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 [2024-11-05 16:32:21.507853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.487 BaseBdev3 
00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.487 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 [ 00:18:08.487 { 00:18:08.487 "name": "BaseBdev3", 00:18:08.487 "aliases": [ 00:18:08.487 "eca7b678-9790-4eec-9041-631e1af923f1" 00:18:08.487 ], 00:18:08.487 "product_name": "Malloc disk", 00:18:08.487 "block_size": 512, 00:18:08.487 "num_blocks": 65536, 00:18:08.487 "uuid": "eca7b678-9790-4eec-9041-631e1af923f1", 00:18:08.487 
"assigned_rate_limits": { 00:18:08.487 "rw_ios_per_sec": 0, 00:18:08.487 "rw_mbytes_per_sec": 0, 00:18:08.487 "r_mbytes_per_sec": 0, 00:18:08.487 "w_mbytes_per_sec": 0 00:18:08.487 }, 00:18:08.487 "claimed": true, 00:18:08.487 "claim_type": "exclusive_write", 00:18:08.487 "zoned": false, 00:18:08.487 "supported_io_types": { 00:18:08.487 "read": true, 00:18:08.487 "write": true, 00:18:08.487 "unmap": true, 00:18:08.487 "flush": true, 00:18:08.487 "reset": true, 00:18:08.487 "nvme_admin": false, 00:18:08.487 "nvme_io": false, 00:18:08.487 "nvme_io_md": false, 00:18:08.487 "write_zeroes": true, 00:18:08.487 "zcopy": true, 00:18:08.487 "get_zone_info": false, 00:18:08.487 "zone_management": false, 00:18:08.487 "zone_append": false, 00:18:08.487 "compare": false, 00:18:08.487 "compare_and_write": false, 00:18:08.487 "abort": true, 00:18:08.487 "seek_hole": false, 00:18:08.487 "seek_data": false, 00:18:08.487 "copy": true, 00:18:08.487 "nvme_iov_md": false 00:18:08.487 }, 00:18:08.487 "memory_domains": [ 00:18:08.487 { 00:18:08.487 "dma_device_id": "system", 00:18:08.487 "dma_device_type": 1 00:18:08.487 }, 00:18:08.487 { 00:18:08.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.488 "dma_device_type": 2 00:18:08.488 } 00:18:08.488 ], 00:18:08.488 "driver_specific": {} 00:18:08.488 } 00:18:08.488 ] 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.488 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.746 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.746 "name": "Existed_Raid", 00:18:08.746 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:08.746 "strip_size_kb": 64, 00:18:08.746 "state": "configuring", 00:18:08.746 "raid_level": "raid5f", 00:18:08.746 "superblock": true, 00:18:08.746 "num_base_bdevs": 4, 00:18:08.746 "num_base_bdevs_discovered": 3, 
00:18:08.746 "num_base_bdevs_operational": 4, 00:18:08.746 "base_bdevs_list": [ 00:18:08.746 { 00:18:08.746 "name": "BaseBdev1", 00:18:08.746 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:08.746 "is_configured": true, 00:18:08.746 "data_offset": 2048, 00:18:08.746 "data_size": 63488 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "name": "BaseBdev2", 00:18:08.746 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:08.746 "is_configured": true, 00:18:08.746 "data_offset": 2048, 00:18:08.746 "data_size": 63488 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "name": "BaseBdev3", 00:18:08.746 "uuid": "eca7b678-9790-4eec-9041-631e1af923f1", 00:18:08.746 "is_configured": true, 00:18:08.746 "data_offset": 2048, 00:18:08.746 "data_size": 63488 00:18:08.746 }, 00:18:08.746 { 00:18:08.746 "name": "BaseBdev4", 00:18:08.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.746 "is_configured": false, 00:18:08.746 "data_offset": 0, 00:18:08.746 "data_size": 0 00:18:08.746 } 00:18:08.746 ] 00:18:08.746 }' 00:18:08.746 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.746 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.005 16:32:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:09.005 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.005 16:32:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.005 [2024-11-05 16:32:22.029943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.005 [2024-11-05 16:32:22.030375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:09.005 [2024-11-05 16:32:22.030435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:09.005 [2024-11-05 
16:32:22.030775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:09.005 BaseBdev4 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.005 [2024-11-05 16:32:22.039482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:09.005 [2024-11-05 16:32:22.039507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:09.005 [2024-11-05 16:32:22.039831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:09.005 16:32:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.005 [ 00:18:09.005 { 00:18:09.005 "name": "BaseBdev4", 00:18:09.005 "aliases": [ 00:18:09.005 "43826d2d-6ee3-4708-ac19-75c8f54b476b" 00:18:09.005 ], 00:18:09.005 "product_name": "Malloc disk", 00:18:09.005 "block_size": 512, 00:18:09.005 "num_blocks": 65536, 00:18:09.005 "uuid": "43826d2d-6ee3-4708-ac19-75c8f54b476b", 00:18:09.005 "assigned_rate_limits": { 00:18:09.005 "rw_ios_per_sec": 0, 00:18:09.005 "rw_mbytes_per_sec": 0, 00:18:09.005 "r_mbytes_per_sec": 0, 00:18:09.005 "w_mbytes_per_sec": 0 00:18:09.005 }, 00:18:09.005 "claimed": true, 00:18:09.005 "claim_type": "exclusive_write", 00:18:09.005 "zoned": false, 00:18:09.005 "supported_io_types": { 00:18:09.005 "read": true, 00:18:09.005 "write": true, 00:18:09.005 "unmap": true, 00:18:09.005 "flush": true, 00:18:09.005 "reset": true, 00:18:09.005 "nvme_admin": false, 00:18:09.005 "nvme_io": false, 00:18:09.005 "nvme_io_md": false, 00:18:09.005 "write_zeroes": true, 00:18:09.005 "zcopy": true, 00:18:09.005 "get_zone_info": false, 00:18:09.005 "zone_management": false, 00:18:09.005 "zone_append": false, 00:18:09.005 "compare": false, 00:18:09.005 "compare_and_write": false, 00:18:09.005 "abort": true, 00:18:09.005 "seek_hole": false, 00:18:09.005 "seek_data": false, 00:18:09.005 "copy": true, 00:18:09.005 "nvme_iov_md": false 00:18:09.005 }, 00:18:09.005 "memory_domains": [ 00:18:09.005 { 00:18:09.005 "dma_device_id": "system", 00:18:09.005 "dma_device_type": 1 00:18:09.005 }, 00:18:09.005 { 00:18:09.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.005 "dma_device_type": 2 00:18:09.005 } 00:18:09.005 ], 00:18:09.005 "driver_specific": {} 00:18:09.005 } 00:18:09.005 ] 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.005 16:32:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:09.005 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.264 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.264 "name": "Existed_Raid", 00:18:09.264 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:09.264 "strip_size_kb": 64, 00:18:09.264 "state": "online", 00:18:09.264 "raid_level": "raid5f", 00:18:09.264 "superblock": true, 00:18:09.264 "num_base_bdevs": 4, 00:18:09.264 "num_base_bdevs_discovered": 4, 00:18:09.264 "num_base_bdevs_operational": 4, 00:18:09.264 "base_bdevs_list": [ 00:18:09.264 { 00:18:09.264 "name": "BaseBdev1", 00:18:09.264 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 00:18:09.264 "data_size": 63488 00:18:09.264 }, 00:18:09.264 { 00:18:09.264 "name": "BaseBdev2", 00:18:09.264 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 00:18:09.264 "data_size": 63488 00:18:09.264 }, 00:18:09.264 { 00:18:09.264 "name": "BaseBdev3", 00:18:09.264 "uuid": "eca7b678-9790-4eec-9041-631e1af923f1", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 00:18:09.264 "data_size": 63488 00:18:09.264 }, 00:18:09.264 { 00:18:09.264 "name": "BaseBdev4", 00:18:09.264 "uuid": "43826d2d-6ee3-4708-ac19-75c8f54b476b", 00:18:09.264 "is_configured": true, 00:18:09.264 "data_offset": 2048, 00:18:09.264 "data_size": 63488 00:18:09.264 } 00:18:09.264 ] 00:18:09.264 }' 00:18:09.264 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.264 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.524 [2024-11-05 16:32:22.472676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.524 "name": "Existed_Raid", 00:18:09.524 "aliases": [ 00:18:09.524 "a526b228-8b85-4ec4-a258-a041e1f68880" 00:18:09.524 ], 00:18:09.524 "product_name": "Raid Volume", 00:18:09.524 "block_size": 512, 00:18:09.524 "num_blocks": 190464, 00:18:09.524 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:09.524 "assigned_rate_limits": { 00:18:09.524 "rw_ios_per_sec": 0, 00:18:09.524 "rw_mbytes_per_sec": 0, 00:18:09.524 "r_mbytes_per_sec": 0, 00:18:09.524 "w_mbytes_per_sec": 0 00:18:09.524 }, 00:18:09.524 "claimed": false, 00:18:09.524 "zoned": false, 00:18:09.524 "supported_io_types": { 00:18:09.524 "read": true, 00:18:09.524 "write": true, 00:18:09.524 "unmap": false, 00:18:09.524 "flush": false, 
00:18:09.524 "reset": true, 00:18:09.524 "nvme_admin": false, 00:18:09.524 "nvme_io": false, 00:18:09.524 "nvme_io_md": false, 00:18:09.524 "write_zeroes": true, 00:18:09.524 "zcopy": false, 00:18:09.524 "get_zone_info": false, 00:18:09.524 "zone_management": false, 00:18:09.524 "zone_append": false, 00:18:09.524 "compare": false, 00:18:09.524 "compare_and_write": false, 00:18:09.524 "abort": false, 00:18:09.524 "seek_hole": false, 00:18:09.524 "seek_data": false, 00:18:09.524 "copy": false, 00:18:09.524 "nvme_iov_md": false 00:18:09.524 }, 00:18:09.524 "driver_specific": { 00:18:09.524 "raid": { 00:18:09.524 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:09.524 "strip_size_kb": 64, 00:18:09.524 "state": "online", 00:18:09.524 "raid_level": "raid5f", 00:18:09.524 "superblock": true, 00:18:09.524 "num_base_bdevs": 4, 00:18:09.524 "num_base_bdevs_discovered": 4, 00:18:09.524 "num_base_bdevs_operational": 4, 00:18:09.524 "base_bdevs_list": [ 00:18:09.524 { 00:18:09.524 "name": "BaseBdev1", 00:18:09.524 "uuid": "f54564ff-efb8-46f7-915e-592a3cdf57a4", 00:18:09.524 "is_configured": true, 00:18:09.524 "data_offset": 2048, 00:18:09.524 "data_size": 63488 00:18:09.524 }, 00:18:09.524 { 00:18:09.524 "name": "BaseBdev2", 00:18:09.524 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:09.524 "is_configured": true, 00:18:09.524 "data_offset": 2048, 00:18:09.524 "data_size": 63488 00:18:09.524 }, 00:18:09.524 { 00:18:09.524 "name": "BaseBdev3", 00:18:09.524 "uuid": "eca7b678-9790-4eec-9041-631e1af923f1", 00:18:09.524 "is_configured": true, 00:18:09.524 "data_offset": 2048, 00:18:09.524 "data_size": 63488 00:18:09.524 }, 00:18:09.524 { 00:18:09.524 "name": "BaseBdev4", 00:18:09.524 "uuid": "43826d2d-6ee3-4708-ac19-75c8f54b476b", 00:18:09.524 "is_configured": true, 00:18:09.524 "data_offset": 2048, 00:18:09.524 "data_size": 63488 00:18:09.524 } 00:18:09.524 ] 00:18:09.524 } 00:18:09.524 } 00:18:09.524 }' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:09.524 BaseBdev2 00:18:09.524 BaseBdev3 00:18:09.524 BaseBdev4' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.524 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:09.784 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.785 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.785 [2024-11-05 16:32:22.771987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.043 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.043 "name": "Existed_Raid", 00:18:10.043 "uuid": "a526b228-8b85-4ec4-a258-a041e1f68880", 00:18:10.043 "strip_size_kb": 64, 00:18:10.043 "state": "online", 00:18:10.043 "raid_level": "raid5f", 00:18:10.044 "superblock": true, 00:18:10.044 "num_base_bdevs": 4, 00:18:10.044 "num_base_bdevs_discovered": 3, 00:18:10.044 "num_base_bdevs_operational": 3, 00:18:10.044 "base_bdevs_list": [ 00:18:10.044 { 00:18:10.044 "name": null, 00:18:10.044 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:10.044 "is_configured": false, 00:18:10.044 "data_offset": 0, 00:18:10.044 "data_size": 63488 00:18:10.044 }, 00:18:10.044 { 00:18:10.044 "name": "BaseBdev2", 00:18:10.044 "uuid": "da130d19-a571-4f7e-8a3f-612148ff702f", 00:18:10.044 "is_configured": true, 00:18:10.044 "data_offset": 2048, 00:18:10.044 "data_size": 63488 00:18:10.044 }, 00:18:10.044 { 00:18:10.044 "name": "BaseBdev3", 00:18:10.044 "uuid": "eca7b678-9790-4eec-9041-631e1af923f1", 00:18:10.044 "is_configured": true, 00:18:10.044 "data_offset": 2048, 00:18:10.044 "data_size": 63488 00:18:10.044 }, 00:18:10.044 { 00:18:10.044 "name": "BaseBdev4", 00:18:10.044 "uuid": "43826d2d-6ee3-4708-ac19-75c8f54b476b", 00:18:10.044 "is_configured": true, 00:18:10.044 "data_offset": 2048, 00:18:10.044 "data_size": 63488 00:18:10.044 } 00:18:10.044 ] 00:18:10.044 }' 00:18:10.044 16:32:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.044 16:32:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.302 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.302 [2024-11-05 16:32:23.380278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.302 [2024-11-05 16:32:23.380580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.561 [2024-11-05 16:32:23.493481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.561 
16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.561 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 [2024-11-05 16:32:23.545461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.820 [2024-11-05 16:32:23.707816] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:10.820 [2024-11-05 16:32:23.707874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.820 16:32:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.079 BaseBdev2 00:18:11.079 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.079 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 [ 00:18:11.080 { 00:18:11.080 "name": "BaseBdev2", 00:18:11.080 "aliases": [ 00:18:11.080 "60e3ca41-c0c4-4f9e-a86a-f5f786553763" 00:18:11.080 ], 00:18:11.080 "product_name": "Malloc disk", 00:18:11.080 "block_size": 512, 00:18:11.080 "num_blocks": 65536, 00:18:11.080 "uuid": 
"60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:11.080 "assigned_rate_limits": { 00:18:11.080 "rw_ios_per_sec": 0, 00:18:11.080 "rw_mbytes_per_sec": 0, 00:18:11.080 "r_mbytes_per_sec": 0, 00:18:11.080 "w_mbytes_per_sec": 0 00:18:11.080 }, 00:18:11.080 "claimed": false, 00:18:11.080 "zoned": false, 00:18:11.080 "supported_io_types": { 00:18:11.080 "read": true, 00:18:11.080 "write": true, 00:18:11.080 "unmap": true, 00:18:11.080 "flush": true, 00:18:11.080 "reset": true, 00:18:11.080 "nvme_admin": false, 00:18:11.080 "nvme_io": false, 00:18:11.080 "nvme_io_md": false, 00:18:11.080 "write_zeroes": true, 00:18:11.080 "zcopy": true, 00:18:11.080 "get_zone_info": false, 00:18:11.080 "zone_management": false, 00:18:11.080 "zone_append": false, 00:18:11.080 "compare": false, 00:18:11.080 "compare_and_write": false, 00:18:11.080 "abort": true, 00:18:11.080 "seek_hole": false, 00:18:11.080 "seek_data": false, 00:18:11.080 "copy": true, 00:18:11.080 "nvme_iov_md": false 00:18:11.080 }, 00:18:11.080 "memory_domains": [ 00:18:11.080 { 00:18:11.080 "dma_device_id": "system", 00:18:11.080 "dma_device_type": 1 00:18:11.080 }, 00:18:11.080 { 00:18:11.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.080 "dma_device_type": 2 00:18:11.080 } 00:18:11.080 ], 00:18:11.080 "driver_specific": {} 00:18:11.080 } 00:18:11.080 ] 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 BaseBdev3 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 [ 00:18:11.080 { 00:18:11.080 "name": "BaseBdev3", 00:18:11.080 "aliases": [ 00:18:11.080 "417e12c0-d33c-4ec5-9a09-fe62f020f641" 00:18:11.080 ], 00:18:11.080 
"product_name": "Malloc disk", 00:18:11.080 "block_size": 512, 00:18:11.080 "num_blocks": 65536, 00:18:11.080 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:11.080 "assigned_rate_limits": { 00:18:11.080 "rw_ios_per_sec": 0, 00:18:11.080 "rw_mbytes_per_sec": 0, 00:18:11.080 "r_mbytes_per_sec": 0, 00:18:11.080 "w_mbytes_per_sec": 0 00:18:11.080 }, 00:18:11.080 "claimed": false, 00:18:11.080 "zoned": false, 00:18:11.080 "supported_io_types": { 00:18:11.080 "read": true, 00:18:11.080 "write": true, 00:18:11.080 "unmap": true, 00:18:11.080 "flush": true, 00:18:11.080 "reset": true, 00:18:11.080 "nvme_admin": false, 00:18:11.080 "nvme_io": false, 00:18:11.080 "nvme_io_md": false, 00:18:11.080 "write_zeroes": true, 00:18:11.080 "zcopy": true, 00:18:11.080 "get_zone_info": false, 00:18:11.080 "zone_management": false, 00:18:11.080 "zone_append": false, 00:18:11.080 "compare": false, 00:18:11.080 "compare_and_write": false, 00:18:11.080 "abort": true, 00:18:11.080 "seek_hole": false, 00:18:11.080 "seek_data": false, 00:18:11.080 "copy": true, 00:18:11.080 "nvme_iov_md": false 00:18:11.080 }, 00:18:11.080 "memory_domains": [ 00:18:11.080 { 00:18:11.080 "dma_device_id": "system", 00:18:11.080 "dma_device_type": 1 00:18:11.080 }, 00:18:11.080 { 00:18:11.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.080 "dma_device_type": 2 00:18:11.080 } 00:18:11.080 ], 00:18:11.080 "driver_specific": {} 00:18:11.080 } 00:18:11.080 ] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 BaseBdev4 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.080 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.080 [ 00:18:11.080 { 00:18:11.080 "name": "BaseBdev4", 00:18:11.080 
"aliases": [ 00:18:11.080 "45103c24-21dd-4109-8f8f-4cdb18e62e51" 00:18:11.080 ], 00:18:11.080 "product_name": "Malloc disk", 00:18:11.080 "block_size": 512, 00:18:11.080 "num_blocks": 65536, 00:18:11.080 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:11.080 "assigned_rate_limits": { 00:18:11.080 "rw_ios_per_sec": 0, 00:18:11.080 "rw_mbytes_per_sec": 0, 00:18:11.080 "r_mbytes_per_sec": 0, 00:18:11.080 "w_mbytes_per_sec": 0 00:18:11.080 }, 00:18:11.080 "claimed": false, 00:18:11.080 "zoned": false, 00:18:11.080 "supported_io_types": { 00:18:11.080 "read": true, 00:18:11.080 "write": true, 00:18:11.080 "unmap": true, 00:18:11.080 "flush": true, 00:18:11.080 "reset": true, 00:18:11.080 "nvme_admin": false, 00:18:11.080 "nvme_io": false, 00:18:11.080 "nvme_io_md": false, 00:18:11.080 "write_zeroes": true, 00:18:11.080 "zcopy": true, 00:18:11.080 "get_zone_info": false, 00:18:11.080 "zone_management": false, 00:18:11.080 "zone_append": false, 00:18:11.080 "compare": false, 00:18:11.080 "compare_and_write": false, 00:18:11.080 "abort": true, 00:18:11.080 "seek_hole": false, 00:18:11.080 "seek_data": false, 00:18:11.080 "copy": true, 00:18:11.080 "nvme_iov_md": false 00:18:11.080 }, 00:18:11.081 "memory_domains": [ 00:18:11.081 { 00:18:11.081 "dma_device_id": "system", 00:18:11.081 "dma_device_type": 1 00:18:11.081 }, 00:18:11.081 { 00:18:11.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.081 "dma_device_type": 2 00:18:11.081 } 00:18:11.081 ], 00:18:11.081 "driver_specific": {} 00:18:11.081 } 00:18:11.081 ] 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:11.081 
16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 [2024-11-05 16:32:24.143965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.081 [2024-11-05 16:32:24.144064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.081 [2024-11-05 16:32:24.144120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.081 [2024-11-05 16:32:24.146174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.081 [2024-11-05 16:32:24.146284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.081 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.339 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.339 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.339 "name": "Existed_Raid", 00:18:11.339 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:11.339 "strip_size_kb": 64, 00:18:11.339 "state": "configuring", 00:18:11.339 "raid_level": "raid5f", 00:18:11.339 "superblock": true, 00:18:11.339 "num_base_bdevs": 4, 00:18:11.339 "num_base_bdevs_discovered": 3, 00:18:11.339 "num_base_bdevs_operational": 4, 00:18:11.339 "base_bdevs_list": [ 00:18:11.339 { 00:18:11.339 "name": "BaseBdev1", 00:18:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.339 "is_configured": false, 00:18:11.339 "data_offset": 0, 00:18:11.339 "data_size": 0 00:18:11.339 }, 00:18:11.339 { 00:18:11.339 "name": "BaseBdev2", 00:18:11.339 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:11.339 "is_configured": true, 00:18:11.339 "data_offset": 2048, 00:18:11.339 "data_size": 63488 00:18:11.339 }, 00:18:11.339 { 00:18:11.339 "name": "BaseBdev3", 
00:18:11.339 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:11.339 "is_configured": true, 00:18:11.339 "data_offset": 2048, 00:18:11.339 "data_size": 63488 00:18:11.339 }, 00:18:11.339 { 00:18:11.339 "name": "BaseBdev4", 00:18:11.339 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:11.339 "is_configured": true, 00:18:11.339 "data_offset": 2048, 00:18:11.339 "data_size": 63488 00:18:11.339 } 00:18:11.339 ] 00:18:11.339 }' 00:18:11.339 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.339 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.599 [2024-11-05 16:32:24.535323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.599 
16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.599 "name": "Existed_Raid", 00:18:11.599 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:11.599 "strip_size_kb": 64, 00:18:11.599 "state": "configuring", 00:18:11.599 "raid_level": "raid5f", 00:18:11.599 "superblock": true, 00:18:11.599 "num_base_bdevs": 4, 00:18:11.599 "num_base_bdevs_discovered": 2, 00:18:11.599 "num_base_bdevs_operational": 4, 00:18:11.599 "base_bdevs_list": [ 00:18:11.599 { 00:18:11.599 "name": "BaseBdev1", 00:18:11.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.599 "is_configured": false, 00:18:11.599 "data_offset": 0, 00:18:11.599 "data_size": 0 00:18:11.599 }, 00:18:11.599 { 00:18:11.599 "name": null, 00:18:11.599 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:11.599 "is_configured": false, 00:18:11.599 "data_offset": 0, 00:18:11.599 "data_size": 63488 00:18:11.599 }, 00:18:11.599 { 
00:18:11.599 "name": "BaseBdev3", 00:18:11.599 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:11.599 "is_configured": true, 00:18:11.599 "data_offset": 2048, 00:18:11.599 "data_size": 63488 00:18:11.599 }, 00:18:11.599 { 00:18:11.599 "name": "BaseBdev4", 00:18:11.599 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:11.599 "is_configured": true, 00:18:11.599 "data_offset": 2048, 00:18:11.599 "data_size": 63488 00:18:11.599 } 00:18:11.599 ] 00:18:11.599 }' 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.599 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.168 16:32:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:12.168 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 16:32:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 [2024-11-05 16:32:25.066057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.168 BaseBdev1 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 [ 00:18:12.168 { 00:18:12.168 "name": "BaseBdev1", 00:18:12.168 "aliases": [ 00:18:12.168 "c2fd37ab-ee93-4765-a99f-97ee008a54dc" 00:18:12.168 ], 00:18:12.168 "product_name": "Malloc disk", 00:18:12.168 "block_size": 512, 00:18:12.168 "num_blocks": 65536, 00:18:12.168 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:12.168 "assigned_rate_limits": { 00:18:12.168 "rw_ios_per_sec": 0, 00:18:12.168 "rw_mbytes_per_sec": 0, 00:18:12.168 
"r_mbytes_per_sec": 0, 00:18:12.168 "w_mbytes_per_sec": 0 00:18:12.168 }, 00:18:12.168 "claimed": true, 00:18:12.168 "claim_type": "exclusive_write", 00:18:12.168 "zoned": false, 00:18:12.168 "supported_io_types": { 00:18:12.168 "read": true, 00:18:12.168 "write": true, 00:18:12.168 "unmap": true, 00:18:12.168 "flush": true, 00:18:12.168 "reset": true, 00:18:12.168 "nvme_admin": false, 00:18:12.168 "nvme_io": false, 00:18:12.168 "nvme_io_md": false, 00:18:12.168 "write_zeroes": true, 00:18:12.168 "zcopy": true, 00:18:12.168 "get_zone_info": false, 00:18:12.168 "zone_management": false, 00:18:12.168 "zone_append": false, 00:18:12.168 "compare": false, 00:18:12.168 "compare_and_write": false, 00:18:12.168 "abort": true, 00:18:12.168 "seek_hole": false, 00:18:12.168 "seek_data": false, 00:18:12.168 "copy": true, 00:18:12.168 "nvme_iov_md": false 00:18:12.168 }, 00:18:12.168 "memory_domains": [ 00:18:12.168 { 00:18:12.168 "dma_device_id": "system", 00:18:12.168 "dma_device_type": 1 00:18:12.168 }, 00:18:12.168 { 00:18:12.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.168 "dma_device_type": 2 00:18:12.168 } 00:18:12.168 ], 00:18:12.168 "driver_specific": {} 00:18:12.168 } 00:18:12.168 ] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.168 16:32:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.168 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.168 "name": "Existed_Raid", 00:18:12.168 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:12.168 "strip_size_kb": 64, 00:18:12.168 "state": "configuring", 00:18:12.168 "raid_level": "raid5f", 00:18:12.168 "superblock": true, 00:18:12.168 "num_base_bdevs": 4, 00:18:12.168 "num_base_bdevs_discovered": 3, 00:18:12.168 "num_base_bdevs_operational": 4, 00:18:12.168 "base_bdevs_list": [ 00:18:12.168 { 00:18:12.168 "name": "BaseBdev1", 00:18:12.168 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:12.168 "is_configured": true, 00:18:12.168 "data_offset": 2048, 00:18:12.168 "data_size": 63488 00:18:12.168 
}, 00:18:12.168 { 00:18:12.168 "name": null, 00:18:12.168 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:12.168 "is_configured": false, 00:18:12.168 "data_offset": 0, 00:18:12.168 "data_size": 63488 00:18:12.168 }, 00:18:12.168 { 00:18:12.168 "name": "BaseBdev3", 00:18:12.168 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:12.168 "is_configured": true, 00:18:12.168 "data_offset": 2048, 00:18:12.168 "data_size": 63488 00:18:12.168 }, 00:18:12.168 { 00:18:12.168 "name": "BaseBdev4", 00:18:12.168 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:12.168 "is_configured": true, 00:18:12.168 "data_offset": 2048, 00:18:12.169 "data_size": 63488 00:18:12.169 } 00:18:12.169 ] 00:18:12.169 }' 00:18:12.169 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.169 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.738 
[2024-11-05 16:32:25.565323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.738 "name": "Existed_Raid", 00:18:12.738 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:12.738 "strip_size_kb": 64, 00:18:12.738 "state": "configuring", 00:18:12.738 "raid_level": "raid5f", 00:18:12.738 "superblock": true, 00:18:12.738 "num_base_bdevs": 4, 00:18:12.738 "num_base_bdevs_discovered": 2, 00:18:12.738 "num_base_bdevs_operational": 4, 00:18:12.738 "base_bdevs_list": [ 00:18:12.738 { 00:18:12.738 "name": "BaseBdev1", 00:18:12.738 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:12.738 "is_configured": true, 00:18:12.738 "data_offset": 2048, 00:18:12.738 "data_size": 63488 00:18:12.738 }, 00:18:12.738 { 00:18:12.738 "name": null, 00:18:12.738 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:12.738 "is_configured": false, 00:18:12.738 "data_offset": 0, 00:18:12.738 "data_size": 63488 00:18:12.738 }, 00:18:12.738 { 00:18:12.738 "name": null, 00:18:12.738 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:12.738 "is_configured": false, 00:18:12.738 "data_offset": 0, 00:18:12.738 "data_size": 63488 00:18:12.738 }, 00:18:12.738 { 00:18:12.738 "name": "BaseBdev4", 00:18:12.738 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:12.738 "is_configured": true, 00:18:12.738 "data_offset": 2048, 00:18:12.738 "data_size": 63488 00:18:12.738 } 00:18:12.738 ] 00:18:12.738 }' 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.738 16:32:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.996 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.996 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.996 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:12.996 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:12.996 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.255 [2024-11-05 16:32:26.120464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.255 16:32:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.255 "name": "Existed_Raid", 00:18:13.255 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:13.255 "strip_size_kb": 64, 00:18:13.255 "state": "configuring", 00:18:13.255 "raid_level": "raid5f", 00:18:13.255 "superblock": true, 00:18:13.255 "num_base_bdevs": 4, 00:18:13.255 "num_base_bdevs_discovered": 3, 00:18:13.255 "num_base_bdevs_operational": 4, 00:18:13.255 "base_bdevs_list": [ 00:18:13.255 { 00:18:13.255 "name": "BaseBdev1", 00:18:13.255 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:13.255 "is_configured": true, 00:18:13.255 "data_offset": 2048, 00:18:13.255 "data_size": 63488 00:18:13.255 }, 00:18:13.255 { 00:18:13.255 "name": null, 00:18:13.255 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:13.255 "is_configured": false, 00:18:13.255 "data_offset": 0, 00:18:13.255 "data_size": 63488 00:18:13.255 }, 00:18:13.255 { 00:18:13.255 "name": "BaseBdev3", 00:18:13.255 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:13.255 "is_configured": true, 00:18:13.255 "data_offset": 2048, 00:18:13.255 "data_size": 63488 00:18:13.255 }, 00:18:13.255 { 
00:18:13.255 "name": "BaseBdev4", 00:18:13.255 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:13.255 "is_configured": true, 00:18:13.255 "data_offset": 2048, 00:18:13.255 "data_size": 63488 00:18:13.255 } 00:18:13.255 ] 00:18:13.255 }' 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.255 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.515 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.773 [2024-11-05 16:32:26.607693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.773 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.773 "name": "Existed_Raid", 00:18:13.773 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:13.773 "strip_size_kb": 64, 00:18:13.773 "state": "configuring", 00:18:13.773 "raid_level": "raid5f", 00:18:13.773 "superblock": true, 00:18:13.773 "num_base_bdevs": 4, 00:18:13.773 "num_base_bdevs_discovered": 2, 00:18:13.773 
"num_base_bdevs_operational": 4, 00:18:13.773 "base_bdevs_list": [ 00:18:13.773 { 00:18:13.773 "name": null, 00:18:13.773 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:13.773 "is_configured": false, 00:18:13.773 "data_offset": 0, 00:18:13.774 "data_size": 63488 00:18:13.774 }, 00:18:13.774 { 00:18:13.774 "name": null, 00:18:13.774 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:13.774 "is_configured": false, 00:18:13.774 "data_offset": 0, 00:18:13.774 "data_size": 63488 00:18:13.774 }, 00:18:13.774 { 00:18:13.774 "name": "BaseBdev3", 00:18:13.774 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:13.774 "is_configured": true, 00:18:13.774 "data_offset": 2048, 00:18:13.774 "data_size": 63488 00:18:13.774 }, 00:18:13.774 { 00:18:13.774 "name": "BaseBdev4", 00:18:13.774 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:13.774 "is_configured": true, 00:18:13.774 "data_offset": 2048, 00:18:13.774 "data_size": 63488 00:18:13.774 } 00:18:13.774 ] 00:18:13.774 }' 00:18:13.774 16:32:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.774 16:32:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.341 [2024-11-05 16:32:27.227753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.341 "name": "Existed_Raid", 00:18:14.341 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:14.341 "strip_size_kb": 64, 00:18:14.341 "state": "configuring", 00:18:14.341 "raid_level": "raid5f", 00:18:14.341 "superblock": true, 00:18:14.341 "num_base_bdevs": 4, 00:18:14.341 "num_base_bdevs_discovered": 3, 00:18:14.341 "num_base_bdevs_operational": 4, 00:18:14.341 "base_bdevs_list": [ 00:18:14.341 { 00:18:14.341 "name": null, 00:18:14.341 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:14.341 "is_configured": false, 00:18:14.341 "data_offset": 0, 00:18:14.341 "data_size": 63488 00:18:14.341 }, 00:18:14.341 { 00:18:14.341 "name": "BaseBdev2", 00:18:14.341 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:14.341 "is_configured": true, 00:18:14.341 "data_offset": 2048, 00:18:14.341 "data_size": 63488 00:18:14.341 }, 00:18:14.341 { 00:18:14.341 "name": "BaseBdev3", 00:18:14.341 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:14.341 "is_configured": true, 00:18:14.341 "data_offset": 2048, 00:18:14.341 "data_size": 63488 00:18:14.341 }, 00:18:14.341 { 00:18:14.341 "name": "BaseBdev4", 00:18:14.341 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:14.341 "is_configured": true, 00:18:14.341 "data_offset": 2048, 00:18:14.341 "data_size": 63488 00:18:14.341 } 00:18:14.341 ] 00:18:14.341 }' 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.341 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c2fd37ab-ee93-4765-a99f-97ee008a54dc 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 [2024-11-05 16:32:27.839938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:14.909 [2024-11-05 16:32:27.840323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:14.909 [2024-11-05 
16:32:27.840381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:14.909 [2024-11-05 16:32:27.840731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:14.909 NewBaseBdev 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 [2024-11-05 16:32:27.848286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:14.909 [2024-11-05 16:32:27.848353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:14.909 [2024-11-05 16:32:27.848693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 [ 00:18:14.909 { 00:18:14.909 "name": "NewBaseBdev", 00:18:14.909 "aliases": [ 00:18:14.909 "c2fd37ab-ee93-4765-a99f-97ee008a54dc" 00:18:14.909 ], 00:18:14.909 "product_name": "Malloc disk", 00:18:14.909 "block_size": 512, 00:18:14.909 "num_blocks": 65536, 00:18:14.909 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:14.909 "assigned_rate_limits": { 00:18:14.909 "rw_ios_per_sec": 0, 00:18:14.909 "rw_mbytes_per_sec": 0, 00:18:14.909 "r_mbytes_per_sec": 0, 00:18:14.909 "w_mbytes_per_sec": 0 00:18:14.909 }, 00:18:14.909 "claimed": true, 00:18:14.909 "claim_type": "exclusive_write", 00:18:14.909 "zoned": false, 00:18:14.909 "supported_io_types": { 00:18:14.909 "read": true, 00:18:14.909 "write": true, 00:18:14.909 "unmap": true, 00:18:14.909 "flush": true, 00:18:14.909 "reset": true, 00:18:14.909 "nvme_admin": false, 00:18:14.909 "nvme_io": false, 00:18:14.909 "nvme_io_md": false, 00:18:14.909 "write_zeroes": true, 00:18:14.909 "zcopy": true, 00:18:14.909 "get_zone_info": false, 00:18:14.909 "zone_management": false, 00:18:14.909 "zone_append": false, 00:18:14.909 "compare": false, 00:18:14.909 "compare_and_write": false, 00:18:14.909 "abort": true, 00:18:14.909 "seek_hole": false, 00:18:14.909 "seek_data": false, 00:18:14.909 "copy": true, 00:18:14.909 "nvme_iov_md": false 00:18:14.909 }, 00:18:14.909 "memory_domains": [ 00:18:14.909 { 00:18:14.909 "dma_device_id": "system", 00:18:14.909 "dma_device_type": 1 00:18:14.909 }, 00:18:14.909 { 00:18:14.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.909 "dma_device_type": 2 00:18:14.909 } 00:18:14.909 ], 00:18:14.909 "driver_specific": {} 00:18:14.909 } 00:18:14.909 ] 00:18:14.909 16:32:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:14.909 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.909 "name": "Existed_Raid", 00:18:14.910 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:14.910 "strip_size_kb": 64, 00:18:14.910 "state": "online", 00:18:14.910 "raid_level": "raid5f", 00:18:14.910 "superblock": true, 00:18:14.910 "num_base_bdevs": 4, 00:18:14.910 "num_base_bdevs_discovered": 4, 00:18:14.910 "num_base_bdevs_operational": 4, 00:18:14.910 "base_bdevs_list": [ 00:18:14.910 { 00:18:14.910 "name": "NewBaseBdev", 00:18:14.910 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:14.910 "is_configured": true, 00:18:14.910 "data_offset": 2048, 00:18:14.910 "data_size": 63488 00:18:14.910 }, 00:18:14.910 { 00:18:14.910 "name": "BaseBdev2", 00:18:14.910 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:14.910 "is_configured": true, 00:18:14.910 "data_offset": 2048, 00:18:14.910 "data_size": 63488 00:18:14.910 }, 00:18:14.910 { 00:18:14.910 "name": "BaseBdev3", 00:18:14.910 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:14.910 "is_configured": true, 00:18:14.910 "data_offset": 2048, 00:18:14.910 "data_size": 63488 00:18:14.910 }, 00:18:14.910 { 00:18:14.910 "name": "BaseBdev4", 00:18:14.910 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:14.910 "is_configured": true, 00:18:14.910 "data_offset": 2048, 00:18:14.910 "data_size": 63488 00:18:14.910 } 00:18:14.910 ] 00:18:14.910 }' 00:18:14.910 16:32:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.910 16:32:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.478 [2024-11-05 16:32:28.329119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.478 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.478 "name": "Existed_Raid", 00:18:15.478 "aliases": [ 00:18:15.478 "b147a6b0-e9aa-457f-9c69-03bee9bccfe7" 00:18:15.478 ], 00:18:15.478 "product_name": "Raid Volume", 00:18:15.478 "block_size": 512, 00:18:15.478 "num_blocks": 190464, 00:18:15.478 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:15.478 "assigned_rate_limits": { 00:18:15.478 "rw_ios_per_sec": 0, 00:18:15.478 "rw_mbytes_per_sec": 0, 00:18:15.478 "r_mbytes_per_sec": 0, 00:18:15.478 "w_mbytes_per_sec": 0 00:18:15.478 }, 00:18:15.478 "claimed": false, 00:18:15.478 "zoned": false, 00:18:15.478 "supported_io_types": { 00:18:15.478 "read": true, 00:18:15.478 "write": true, 00:18:15.478 "unmap": false, 00:18:15.478 "flush": false, 00:18:15.478 "reset": true, 00:18:15.478 "nvme_admin": false, 00:18:15.478 "nvme_io": false, 
00:18:15.478 "nvme_io_md": false, 00:18:15.478 "write_zeroes": true, 00:18:15.478 "zcopy": false, 00:18:15.478 "get_zone_info": false, 00:18:15.478 "zone_management": false, 00:18:15.478 "zone_append": false, 00:18:15.478 "compare": false, 00:18:15.478 "compare_and_write": false, 00:18:15.478 "abort": false, 00:18:15.478 "seek_hole": false, 00:18:15.478 "seek_data": false, 00:18:15.478 "copy": false, 00:18:15.478 "nvme_iov_md": false 00:18:15.478 }, 00:18:15.478 "driver_specific": { 00:18:15.478 "raid": { 00:18:15.478 "uuid": "b147a6b0-e9aa-457f-9c69-03bee9bccfe7", 00:18:15.478 "strip_size_kb": 64, 00:18:15.478 "state": "online", 00:18:15.478 "raid_level": "raid5f", 00:18:15.478 "superblock": true, 00:18:15.478 "num_base_bdevs": 4, 00:18:15.478 "num_base_bdevs_discovered": 4, 00:18:15.478 "num_base_bdevs_operational": 4, 00:18:15.478 "base_bdevs_list": [ 00:18:15.478 { 00:18:15.478 "name": "NewBaseBdev", 00:18:15.478 "uuid": "c2fd37ab-ee93-4765-a99f-97ee008a54dc", 00:18:15.478 "is_configured": true, 00:18:15.478 "data_offset": 2048, 00:18:15.478 "data_size": 63488 00:18:15.478 }, 00:18:15.478 { 00:18:15.478 "name": "BaseBdev2", 00:18:15.478 "uuid": "60e3ca41-c0c4-4f9e-a86a-f5f786553763", 00:18:15.478 "is_configured": true, 00:18:15.478 "data_offset": 2048, 00:18:15.478 "data_size": 63488 00:18:15.478 }, 00:18:15.478 { 00:18:15.478 "name": "BaseBdev3", 00:18:15.478 "uuid": "417e12c0-d33c-4ec5-9a09-fe62f020f641", 00:18:15.478 "is_configured": true, 00:18:15.478 "data_offset": 2048, 00:18:15.478 "data_size": 63488 00:18:15.478 }, 00:18:15.478 { 00:18:15.478 "name": "BaseBdev4", 00:18:15.479 "uuid": "45103c24-21dd-4109-8f8f-4cdb18e62e51", 00:18:15.479 "is_configured": true, 00:18:15.479 "data_offset": 2048, 00:18:15.479 "data_size": 63488 00:18:15.479 } 00:18:15.479 ] 00:18:15.479 } 00:18:15.479 } 00:18:15.479 }' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:15.479 BaseBdev2 00:18:15.479 BaseBdev3 00:18:15.479 BaseBdev4' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.479 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.738 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.739 [2024-11-05 16:32:28.660316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.739 [2024-11-05 16:32:28.660354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.739 [2024-11-05 16:32:28.660454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.739 [2024-11-05 16:32:28.660812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.739 [2024-11-05 16:32:28.660826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83801 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83801 ']' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83801 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@957 -- # uname 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83801 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83801' 00:18:15.739 killing process with pid 83801 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83801 00:18:15.739 16:32:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83801 00:18:15.739 [2024-11-05 16:32:28.693227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.306 [2024-11-05 16:32:29.148308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.683 16:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:17.683 00:18:17.683 real 0m11.742s 00:18:17.683 user 0m18.426s 00:18:17.683 sys 0m1.996s 00:18:17.683 16:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:17.683 ************************************ 00:18:17.683 END TEST raid5f_state_function_test_sb 00:18:17.683 ************************************ 00:18:17.683 16:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.683 16:32:30 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:17.683 16:32:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:17.683 16:32:30 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:18:17.683 16:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.683 ************************************ 00:18:17.683 START TEST raid5f_superblock_test 00:18:17.683 ************************************ 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84471 00:18:17.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84471 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84471 ']' 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.683 16:32:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.683 [2024-11-05 16:32:30.514393] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:18:17.683 [2024-11-05 16:32:30.514557] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84471 ] 00:18:17.683 [2024-11-05 16:32:30.694490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.942 [2024-11-05 16:32:30.828687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.202 [2024-11-05 16:32:31.059794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.202 [2024-11-05 16:32:31.059965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.462 malloc1 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.462 [2024-11-05 16:32:31.455359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.462 [2024-11-05 16:32:31.455499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.462 [2024-11-05 16:32:31.455604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:18.462 [2024-11-05 16:32:31.455661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.462 [2024-11-05 16:32:31.458223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.462 [2024-11-05 16:32:31.458304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.462 pt1 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.462 malloc2 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.462 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.463 [2024-11-05 16:32:31.519253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.463 [2024-11-05 16:32:31.519329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.463 [2024-11-05 16:32:31.519361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:18.463 [2024-11-05 16:32:31.519374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.463 [2024-11-05 16:32:31.522001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.463 [2024-11-05 16:32:31.522046] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:18.463 pt2 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.463 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.722 malloc3 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.722 [2024-11-05 16:32:31.605350] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:18.722 [2024-11-05 16:32:31.605479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.722 [2024-11-05 16:32:31.605555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:18.722 [2024-11-05 16:32:31.605646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.722 [2024-11-05 16:32:31.608178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.722 [2024-11-05 16:32:31.608271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:18.722 pt3 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.722 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.723 16:32:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.723 malloc4 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.723 [2024-11-05 16:32:31.670991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:18.723 [2024-11-05 16:32:31.671106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.723 [2024-11-05 16:32:31.671176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:18.723 [2024-11-05 16:32:31.671227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.723 [2024-11-05 16:32:31.673863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.723 [2024-11-05 16:32:31.673947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:18.723 pt4 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.723 [2024-11-05 16:32:31.683042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.723 [2024-11-05 16:32:31.685229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.723 [2024-11-05 16:32:31.685304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.723 [2024-11-05 16:32:31.685379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:18.723 [2024-11-05 16:32:31.685637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.723 [2024-11-05 16:32:31.685663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:18.723 [2024-11-05 16:32:31.685968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:18.723 [2024-11-05 16:32:31.694931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.723 [2024-11-05 16:32:31.694957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:18.723 [2024-11-05 16:32:31.695199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.723 
16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.723 "name": "raid_bdev1", 00:18:18.723 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:18.723 "strip_size_kb": 64, 00:18:18.723 "state": "online", 00:18:18.723 "raid_level": "raid5f", 00:18:18.723 "superblock": true, 00:18:18.723 "num_base_bdevs": 4, 00:18:18.723 "num_base_bdevs_discovered": 4, 00:18:18.723 "num_base_bdevs_operational": 4, 00:18:18.723 "base_bdevs_list": [ 00:18:18.723 { 00:18:18.723 "name": "pt1", 00:18:18.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.723 "is_configured": true, 00:18:18.723 "data_offset": 2048, 00:18:18.723 "data_size": 63488 00:18:18.723 }, 00:18:18.723 { 00:18:18.723 "name": "pt2", 00:18:18.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.723 "is_configured": true, 00:18:18.723 "data_offset": 2048, 00:18:18.723 
"data_size": 63488 00:18:18.723 }, 00:18:18.723 { 00:18:18.723 "name": "pt3", 00:18:18.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.723 "is_configured": true, 00:18:18.723 "data_offset": 2048, 00:18:18.723 "data_size": 63488 00:18:18.723 }, 00:18:18.723 { 00:18:18.723 "name": "pt4", 00:18:18.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:18.723 "is_configured": true, 00:18:18.723 "data_offset": 2048, 00:18:18.723 "data_size": 63488 00:18:18.723 } 00:18:18.723 ] 00:18:18.723 }' 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.723 16:32:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.290 [2024-11-05 16:32:32.144588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.290 "name": "raid_bdev1", 00:18:19.290 "aliases": [ 00:18:19.290 "2e5af215-30d7-45ef-911e-9cb9bcd2255c" 00:18:19.290 ], 00:18:19.290 "product_name": "Raid Volume", 00:18:19.290 "block_size": 512, 00:18:19.290 "num_blocks": 190464, 00:18:19.290 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:19.290 "assigned_rate_limits": { 00:18:19.290 "rw_ios_per_sec": 0, 00:18:19.290 "rw_mbytes_per_sec": 0, 00:18:19.290 "r_mbytes_per_sec": 0, 00:18:19.290 "w_mbytes_per_sec": 0 00:18:19.290 }, 00:18:19.290 "claimed": false, 00:18:19.290 "zoned": false, 00:18:19.290 "supported_io_types": { 00:18:19.290 "read": true, 00:18:19.290 "write": true, 00:18:19.290 "unmap": false, 00:18:19.290 "flush": false, 00:18:19.290 "reset": true, 00:18:19.290 "nvme_admin": false, 00:18:19.290 "nvme_io": false, 00:18:19.290 "nvme_io_md": false, 00:18:19.290 "write_zeroes": true, 00:18:19.290 "zcopy": false, 00:18:19.290 "get_zone_info": false, 00:18:19.290 "zone_management": false, 00:18:19.290 "zone_append": false, 00:18:19.290 "compare": false, 00:18:19.290 "compare_and_write": false, 00:18:19.290 "abort": false, 00:18:19.290 "seek_hole": false, 00:18:19.290 "seek_data": false, 00:18:19.290 "copy": false, 00:18:19.290 "nvme_iov_md": false 00:18:19.290 }, 00:18:19.290 "driver_specific": { 00:18:19.290 "raid": { 00:18:19.290 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:19.290 "strip_size_kb": 64, 00:18:19.290 "state": "online", 00:18:19.290 "raid_level": "raid5f", 00:18:19.290 "superblock": true, 00:18:19.290 "num_base_bdevs": 4, 00:18:19.290 "num_base_bdevs_discovered": 4, 00:18:19.290 "num_base_bdevs_operational": 4, 00:18:19.290 "base_bdevs_list": [ 00:18:19.290 { 00:18:19.290 "name": "pt1", 00:18:19.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.290 "is_configured": true, 00:18:19.290 "data_offset": 2048, 
00:18:19.290 "data_size": 63488 00:18:19.290 }, 00:18:19.290 { 00:18:19.290 "name": "pt2", 00:18:19.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.290 "is_configured": true, 00:18:19.290 "data_offset": 2048, 00:18:19.290 "data_size": 63488 00:18:19.290 }, 00:18:19.290 { 00:18:19.290 "name": "pt3", 00:18:19.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.290 "is_configured": true, 00:18:19.290 "data_offset": 2048, 00:18:19.290 "data_size": 63488 00:18:19.290 }, 00:18:19.290 { 00:18:19.290 "name": "pt4", 00:18:19.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.290 "is_configured": true, 00:18:19.290 "data_offset": 2048, 00:18:19.290 "data_size": 63488 00:18:19.290 } 00:18:19.290 ] 00:18:19.290 } 00:18:19.290 } 00:18:19.290 }' 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.290 pt2 00:18:19.290 pt3 00:18:19.290 pt4' 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:19.290 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.291 16:32:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.291 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 [2024-11-05 16:32:32.440035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2e5af215-30d7-45ef-911e-9cb9bcd2255c 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2e5af215-30d7-45ef-911e-9cb9bcd2255c ']' 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.550 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.550 [2024-11-05 16:32:32.467764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.550 [2024-11-05 16:32:32.467794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.551 [2024-11-05 16:32:32.467887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.551 [2024-11-05 16:32:32.467987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.551 [2024-11-05 16:32:32.468004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.551 
16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 [2024-11-05 16:32:32.591636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:19.551 [2024-11-05 16:32:32.593774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:19.551 [2024-11-05 16:32:32.593890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:19.551 [2024-11-05 16:32:32.593936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:19.551 [2024-11-05 16:32:32.593995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:19.551 [2024-11-05 16:32:32.594051] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:19.551 [2024-11-05 16:32:32.594074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:19.551 [2024-11-05 16:32:32.594096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:19.551 [2024-11-05 16:32:32.594111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.551 [2024-11-05 16:32:32.594125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:19.551 request: 00:18:19.551 { 00:18:19.551 "name": "raid_bdev1", 00:18:19.551 "raid_level": "raid5f", 00:18:19.551 "base_bdevs": [ 00:18:19.551 "malloc1", 00:18:19.551 "malloc2", 00:18:19.551 "malloc3", 00:18:19.551 "malloc4" 00:18:19.551 ], 00:18:19.551 "strip_size_kb": 64, 00:18:19.551 "superblock": false, 00:18:19.551 "method": "bdev_raid_create", 00:18:19.551 "req_id": 1 00:18:19.551 } 00:18:19.551 Got JSON-RPC error response 
00:18:19.551 response: 00:18:19.551 { 00:18:19.551 "code": -17, 00:18:19.551 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:19.551 } 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.551 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.811 [2024-11-05 16:32:32.639514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.811 [2024-11-05 16:32:32.639607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:19.811 [2024-11-05 16:32:32.639631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:19.811 [2024-11-05 16:32:32.639643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.811 [2024-11-05 16:32:32.642167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.811 [2024-11-05 16:32:32.642216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.811 [2024-11-05 16:32:32.642309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:19.811 [2024-11-05 16:32:32.642382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.811 pt1 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.811 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.812 "name": "raid_bdev1", 00:18:19.812 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:19.812 "strip_size_kb": 64, 00:18:19.812 "state": "configuring", 00:18:19.812 "raid_level": "raid5f", 00:18:19.812 "superblock": true, 00:18:19.812 "num_base_bdevs": 4, 00:18:19.812 "num_base_bdevs_discovered": 1, 00:18:19.812 "num_base_bdevs_operational": 4, 00:18:19.812 "base_bdevs_list": [ 00:18:19.812 { 00:18:19.812 "name": "pt1", 00:18:19.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.812 "is_configured": true, 00:18:19.812 "data_offset": 2048, 00:18:19.812 "data_size": 63488 00:18:19.812 }, 00:18:19.812 { 00:18:19.812 "name": null, 00:18:19.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.812 "is_configured": false, 00:18:19.812 "data_offset": 2048, 00:18:19.812 "data_size": 63488 00:18:19.812 }, 00:18:19.812 { 00:18:19.812 "name": null, 00:18:19.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.812 "is_configured": false, 00:18:19.812 "data_offset": 2048, 00:18:19.812 "data_size": 63488 00:18:19.812 }, 00:18:19.812 { 00:18:19.812 "name": null, 00:18:19.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.812 "is_configured": false, 00:18:19.812 "data_offset": 2048, 00:18:19.812 "data_size": 63488 00:18:19.812 } 00:18:19.812 ] 00:18:19.812 }' 
00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.812 16:32:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.070 [2024-11-05 16:32:33.062795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.070 [2024-11-05 16:32:33.062877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.070 [2024-11-05 16:32:33.062898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:20.070 [2024-11-05 16:32:33.062911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.070 [2024-11-05 16:32:33.063421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.070 [2024-11-05 16:32:33.063460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.070 [2024-11-05 16:32:33.063576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.070 [2024-11-05 16:32:33.063612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.070 pt2 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.070 [2024-11-05 16:32:33.070778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.070 "name": "raid_bdev1", 00:18:20.070 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:20.070 "strip_size_kb": 64, 00:18:20.070 "state": "configuring", 00:18:20.070 "raid_level": "raid5f", 00:18:20.070 "superblock": true, 00:18:20.070 "num_base_bdevs": 4, 00:18:20.070 "num_base_bdevs_discovered": 1, 00:18:20.070 "num_base_bdevs_operational": 4, 00:18:20.070 "base_bdevs_list": [ 00:18:20.070 { 00:18:20.070 "name": "pt1", 00:18:20.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.070 "is_configured": true, 00:18:20.070 "data_offset": 2048, 00:18:20.070 "data_size": 63488 00:18:20.070 }, 00:18:20.070 { 00:18:20.070 "name": null, 00:18:20.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.070 "is_configured": false, 00:18:20.070 "data_offset": 0, 00:18:20.070 "data_size": 63488 00:18:20.070 }, 00:18:20.070 { 00:18:20.070 "name": null, 00:18:20.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.070 "is_configured": false, 00:18:20.070 "data_offset": 2048, 00:18:20.070 "data_size": 63488 00:18:20.070 }, 00:18:20.070 { 00:18:20.070 "name": null, 00:18:20.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.070 "is_configured": false, 00:18:20.070 "data_offset": 2048, 00:18:20.070 "data_size": 63488 00:18:20.070 } 00:18:20.070 ] 00:18:20.070 }' 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.070 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 [2024-11-05 16:32:33.577909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.638 [2024-11-05 16:32:33.578037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.638 [2024-11-05 16:32:33.578089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:20.638 [2024-11-05 16:32:33.578122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.638 [2024-11-05 16:32:33.578709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.638 [2024-11-05 16:32:33.578786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.638 [2024-11-05 16:32:33.578918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.638 [2024-11-05 16:32:33.578975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.638 pt2 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 [2024-11-05 16:32:33.589853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:20.638 [2024-11-05 16:32:33.589941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.638 [2024-11-05 16:32:33.589996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:20.638 [2024-11-05 16:32:33.590032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.638 [2024-11-05 16:32:33.590453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.638 [2024-11-05 16:32:33.590484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:20.638 [2024-11-05 16:32:33.590586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:20.638 [2024-11-05 16:32:33.590611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.638 pt3 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 [2024-11-05 16:32:33.601826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:20.638 [2024-11-05 16:32:33.601882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.638 [2024-11-05 16:32:33.601906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:20.638 [2024-11-05 16:32:33.601916] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.638 [2024-11-05 16:32:33.602342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.638 [2024-11-05 16:32:33.602359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:20.638 [2024-11-05 16:32:33.602436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:20.638 [2024-11-05 16:32:33.602456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:20.638 [2024-11-05 16:32:33.602638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:20.638 [2024-11-05 16:32:33.602651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:20.638 [2024-11-05 16:32:33.602938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:20.638 pt4 00:18:20.638 [2024-11-05 16:32:33.610976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:20.638 [2024-11-05 16:32:33.611000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:20.638 [2024-11-05 16:32:33.611177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.638 "name": "raid_bdev1", 00:18:20.638 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:20.638 "strip_size_kb": 64, 00:18:20.638 "state": "online", 00:18:20.638 "raid_level": "raid5f", 00:18:20.638 "superblock": true, 00:18:20.638 "num_base_bdevs": 4, 00:18:20.638 "num_base_bdevs_discovered": 4, 00:18:20.638 "num_base_bdevs_operational": 4, 00:18:20.638 "base_bdevs_list": [ 00:18:20.638 { 00:18:20.638 "name": "pt1", 00:18:20.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.638 "is_configured": true, 00:18:20.638 
"data_offset": 2048, 00:18:20.638 "data_size": 63488 00:18:20.638 }, 00:18:20.638 { 00:18:20.638 "name": "pt2", 00:18:20.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.638 "is_configured": true, 00:18:20.638 "data_offset": 2048, 00:18:20.638 "data_size": 63488 00:18:20.638 }, 00:18:20.638 { 00:18:20.638 "name": "pt3", 00:18:20.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.638 "is_configured": true, 00:18:20.638 "data_offset": 2048, 00:18:20.638 "data_size": 63488 00:18:20.638 }, 00:18:20.638 { 00:18:20.638 "name": "pt4", 00:18:20.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.638 "is_configured": true, 00:18:20.638 "data_offset": 2048, 00:18:20.638 "data_size": 63488 00:18:20.638 } 00:18:20.638 ] 00:18:20.638 }' 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.638 16:32:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.207 16:32:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.207 [2024-11-05 16:32:34.087730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.207 "name": "raid_bdev1", 00:18:21.207 "aliases": [ 00:18:21.207 "2e5af215-30d7-45ef-911e-9cb9bcd2255c" 00:18:21.207 ], 00:18:21.207 "product_name": "Raid Volume", 00:18:21.207 "block_size": 512, 00:18:21.207 "num_blocks": 190464, 00:18:21.207 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:21.207 "assigned_rate_limits": { 00:18:21.207 "rw_ios_per_sec": 0, 00:18:21.207 "rw_mbytes_per_sec": 0, 00:18:21.207 "r_mbytes_per_sec": 0, 00:18:21.207 "w_mbytes_per_sec": 0 00:18:21.207 }, 00:18:21.207 "claimed": false, 00:18:21.207 "zoned": false, 00:18:21.207 "supported_io_types": { 00:18:21.207 "read": true, 00:18:21.207 "write": true, 00:18:21.207 "unmap": false, 00:18:21.207 "flush": false, 00:18:21.207 "reset": true, 00:18:21.207 "nvme_admin": false, 00:18:21.207 "nvme_io": false, 00:18:21.207 "nvme_io_md": false, 00:18:21.207 "write_zeroes": true, 00:18:21.207 "zcopy": false, 00:18:21.207 "get_zone_info": false, 00:18:21.207 "zone_management": false, 00:18:21.207 "zone_append": false, 00:18:21.207 "compare": false, 00:18:21.207 "compare_and_write": false, 00:18:21.207 "abort": false, 00:18:21.207 "seek_hole": false, 00:18:21.207 "seek_data": false, 00:18:21.207 "copy": false, 00:18:21.207 "nvme_iov_md": false 00:18:21.207 }, 00:18:21.207 "driver_specific": { 00:18:21.207 "raid": { 00:18:21.207 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:21.207 "strip_size_kb": 64, 00:18:21.207 "state": "online", 00:18:21.207 "raid_level": "raid5f", 00:18:21.207 "superblock": true, 00:18:21.207 "num_base_bdevs": 4, 00:18:21.207 "num_base_bdevs_discovered": 4, 
00:18:21.207 "num_base_bdevs_operational": 4, 00:18:21.207 "base_bdevs_list": [ 00:18:21.207 { 00:18:21.207 "name": "pt1", 00:18:21.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.207 "is_configured": true, 00:18:21.207 "data_offset": 2048, 00:18:21.207 "data_size": 63488 00:18:21.207 }, 00:18:21.207 { 00:18:21.207 "name": "pt2", 00:18:21.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.207 "is_configured": true, 00:18:21.207 "data_offset": 2048, 00:18:21.207 "data_size": 63488 00:18:21.207 }, 00:18:21.207 { 00:18:21.207 "name": "pt3", 00:18:21.207 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.207 "is_configured": true, 00:18:21.207 "data_offset": 2048, 00:18:21.207 "data_size": 63488 00:18:21.207 }, 00:18:21.207 { 00:18:21.207 "name": "pt4", 00:18:21.207 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:21.207 "is_configured": true, 00:18:21.207 "data_offset": 2048, 00:18:21.207 "data_size": 63488 00:18:21.207 } 00:18:21.207 ] 00:18:21.207 } 00:18:21.207 } 00:18:21.207 }' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:21.207 pt2 00:18:21.207 pt3 00:18:21.207 pt4' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.207 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:21.468 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 [2024-11-05 16:32:34.415087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.469 16:32:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2e5af215-30d7-45ef-911e-9cb9bcd2255c '!=' 2e5af215-30d7-45ef-911e-9cb9bcd2255c ']' 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 [2024-11-05 16:32:34.458871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.469 "name": "raid_bdev1", 00:18:21.469 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:21.469 "strip_size_kb": 64, 00:18:21.469 "state": "online", 00:18:21.469 "raid_level": "raid5f", 00:18:21.469 "superblock": true, 00:18:21.469 "num_base_bdevs": 4, 00:18:21.469 "num_base_bdevs_discovered": 3, 00:18:21.469 "num_base_bdevs_operational": 3, 00:18:21.469 "base_bdevs_list": [ 00:18:21.469 { 00:18:21.469 "name": null, 00:18:21.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.469 "is_configured": false, 00:18:21.469 "data_offset": 0, 00:18:21.469 "data_size": 63488 00:18:21.469 }, 00:18:21.469 { 00:18:21.469 "name": "pt2", 00:18:21.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.469 "is_configured": true, 00:18:21.469 "data_offset": 2048, 00:18:21.469 "data_size": 63488 00:18:21.469 }, 00:18:21.469 { 00:18:21.469 "name": "pt3", 00:18:21.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.469 "is_configured": true, 00:18:21.469 "data_offset": 2048, 00:18:21.469 "data_size": 63488 00:18:21.469 }, 00:18:21.469 { 00:18:21.469 "name": "pt4", 00:18:21.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:21.469 "is_configured": true, 00:18:21.469 
"data_offset": 2048, 00:18:21.469 "data_size": 63488 00:18:21.469 } 00:18:21.469 ] 00:18:21.469 }' 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.469 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 [2024-11-05 16:32:34.910093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.071 [2024-11-05 16:32:34.910126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.071 [2024-11-05 16:32:34.910207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.071 [2024-11-05 16:32:34.910289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.071 [2024-11-05 16:32:34.910298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 [2024-11-05 16:32:34.997968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.071 [2024-11-05 16:32:34.998043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.071 [2024-11-05 16:32:34.998065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:22.071 [2024-11-05 16:32:34.998075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.071 [2024-11-05 16:32:35.000525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.071 [2024-11-05 16:32:35.000634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.071 [2024-11-05 16:32:35.000759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.071 [2024-11-05 16:32:35.000822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.071 pt2 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.071 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.071 "name": "raid_bdev1", 00:18:22.071 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:22.071 "strip_size_kb": 64, 00:18:22.071 "state": "configuring", 00:18:22.071 "raid_level": "raid5f", 00:18:22.071 "superblock": true, 00:18:22.071 
"num_base_bdevs": 4, 00:18:22.071 "num_base_bdevs_discovered": 1, 00:18:22.071 "num_base_bdevs_operational": 3, 00:18:22.071 "base_bdevs_list": [ 00:18:22.071 { 00:18:22.071 "name": null, 00:18:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.071 "is_configured": false, 00:18:22.071 "data_offset": 2048, 00:18:22.071 "data_size": 63488 00:18:22.071 }, 00:18:22.072 { 00:18:22.072 "name": "pt2", 00:18:22.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.072 "is_configured": true, 00:18:22.072 "data_offset": 2048, 00:18:22.072 "data_size": 63488 00:18:22.072 }, 00:18:22.072 { 00:18:22.072 "name": null, 00:18:22.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.072 "is_configured": false, 00:18:22.072 "data_offset": 2048, 00:18:22.072 "data_size": 63488 00:18:22.072 }, 00:18:22.072 { 00:18:22.072 "name": null, 00:18:22.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.072 "is_configured": false, 00:18:22.072 "data_offset": 2048, 00:18:22.072 "data_size": 63488 00:18:22.072 } 00:18:22.072 ] 00:18:22.072 }' 00:18:22.072 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.072 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.641 [2024-11-05 16:32:35.433245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.641 [2024-11-05 
16:32:35.433362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.641 [2024-11-05 16:32:35.433419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:22.641 [2024-11-05 16:32:35.433454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.641 [2024-11-05 16:32:35.433980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.641 [2024-11-05 16:32:35.434047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.641 [2024-11-05 16:32:35.434175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:22.641 [2024-11-05 16:32:35.434239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.641 pt3 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.641 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.641 "name": "raid_bdev1", 00:18:22.641 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:22.641 "strip_size_kb": 64, 00:18:22.641 "state": "configuring", 00:18:22.641 "raid_level": "raid5f", 00:18:22.641 "superblock": true, 00:18:22.641 "num_base_bdevs": 4, 00:18:22.642 "num_base_bdevs_discovered": 2, 00:18:22.642 "num_base_bdevs_operational": 3, 00:18:22.642 "base_bdevs_list": [ 00:18:22.642 { 00:18:22.642 "name": null, 00:18:22.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.642 "is_configured": false, 00:18:22.642 "data_offset": 2048, 00:18:22.642 "data_size": 63488 00:18:22.642 }, 00:18:22.642 { 00:18:22.642 "name": "pt2", 00:18:22.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.642 "is_configured": true, 00:18:22.642 "data_offset": 2048, 00:18:22.642 "data_size": 63488 00:18:22.642 }, 00:18:22.642 { 00:18:22.642 "name": "pt3", 00:18:22.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.642 "is_configured": true, 00:18:22.642 "data_offset": 2048, 00:18:22.642 "data_size": 63488 00:18:22.642 }, 00:18:22.642 { 00:18:22.642 "name": null, 00:18:22.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.642 "is_configured": false, 00:18:22.642 "data_offset": 2048, 
00:18:22.642 "data_size": 63488 00:18:22.642 } 00:18:22.642 ] 00:18:22.642 }' 00:18:22.642 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.642 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.901 [2024-11-05 16:32:35.916474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:22.901 [2024-11-05 16:32:35.916570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.901 [2024-11-05 16:32:35.916601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:22.901 [2024-11-05 16:32:35.916612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.901 [2024-11-05 16:32:35.917124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.901 [2024-11-05 16:32:35.917151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:22.901 [2024-11-05 16:32:35.917248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:22.901 [2024-11-05 16:32:35.917273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:22.901 [2024-11-05 16:32:35.917433] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.901 [2024-11-05 16:32:35.917443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:22.901 [2024-11-05 16:32:35.917747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:22.901 [2024-11-05 16:32:35.926452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.901 [2024-11-05 16:32:35.926481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:22.901 [2024-11-05 16:32:35.926867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.901 pt4 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.901 
16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.901 16:32:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.901 "name": "raid_bdev1", 00:18:22.901 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:22.901 "strip_size_kb": 64, 00:18:22.901 "state": "online", 00:18:22.901 "raid_level": "raid5f", 00:18:22.901 "superblock": true, 00:18:22.901 "num_base_bdevs": 4, 00:18:22.901 "num_base_bdevs_discovered": 3, 00:18:22.901 "num_base_bdevs_operational": 3, 00:18:22.901 "base_bdevs_list": [ 00:18:22.901 { 00:18:22.901 "name": null, 00:18:22.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.901 "is_configured": false, 00:18:22.901 "data_offset": 2048, 00:18:22.901 "data_size": 63488 00:18:22.901 }, 00:18:22.901 { 00:18:22.901 "name": "pt2", 00:18:22.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.901 "is_configured": true, 00:18:22.901 "data_offset": 2048, 00:18:22.901 "data_size": 63488 00:18:22.901 }, 00:18:22.901 { 00:18:22.901 "name": "pt3", 00:18:22.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.901 "is_configured": true, 00:18:22.901 "data_offset": 2048, 00:18:22.901 "data_size": 63488 00:18:22.902 }, 00:18:22.902 { 00:18:22.902 "name": "pt4", 00:18:22.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.902 "is_configured": true, 00:18:22.902 "data_offset": 2048, 00:18:22.902 "data_size": 63488 00:18:22.902 } 00:18:22.902 ] 00:18:22.902 }' 00:18:22.902 16:32:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.902 16:32:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.471 [2024-11-05 16:32:36.361525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.471 [2024-11-05 16:32:36.361630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.471 [2024-11-05 16:32:36.361730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.471 [2024-11-05 16:32:36.361817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.471 [2024-11-05 16:32:36.361830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:23.471 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.472 [2024-11-05 16:32:36.429389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.472 [2024-11-05 16:32:36.429468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.472 [2024-11-05 16:32:36.429501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:23.472 [2024-11-05 16:32:36.429515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.472 [2024-11-05 16:32:36.432254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.472 pt1 00:18:23.472 [2024-11-05 16:32:36.432348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.472 [2024-11-05 16:32:36.432462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:23.472 [2024-11-05 16:32:36.432550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:18:23.472 [2024-11-05 16:32:36.432731] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:23.472 [2024-11-05 16:32:36.432747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.472 [2024-11-05 16:32:36.432766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:23.472 [2024-11-05 16:32:36.432852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.472 [2024-11-05 16:32:36.432980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.472 "name": "raid_bdev1", 00:18:23.472 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:23.472 "strip_size_kb": 64, 00:18:23.472 "state": "configuring", 00:18:23.472 "raid_level": "raid5f", 00:18:23.472 "superblock": true, 00:18:23.472 "num_base_bdevs": 4, 00:18:23.472 "num_base_bdevs_discovered": 2, 00:18:23.472 "num_base_bdevs_operational": 3, 00:18:23.472 "base_bdevs_list": [ 00:18:23.472 { 00:18:23.472 "name": null, 00:18:23.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.472 "is_configured": false, 00:18:23.472 "data_offset": 2048, 00:18:23.472 "data_size": 63488 00:18:23.472 }, 00:18:23.472 { 00:18:23.472 "name": "pt2", 00:18:23.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.472 "is_configured": true, 00:18:23.472 "data_offset": 2048, 00:18:23.472 "data_size": 63488 00:18:23.472 }, 00:18:23.472 { 00:18:23.472 "name": "pt3", 00:18:23.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.472 "is_configured": true, 00:18:23.472 "data_offset": 2048, 00:18:23.472 "data_size": 63488 00:18:23.472 }, 00:18:23.472 { 00:18:23.472 "name": null, 00:18:23.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.472 "is_configured": false, 00:18:23.472 "data_offset": 2048, 00:18:23.472 "data_size": 63488 00:18:23.472 } 00:18:23.472 ] 
00:18:23.472 }' 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.472 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 [2024-11-05 16:32:36.908618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:24.042 [2024-11-05 16:32:36.908689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.042 [2024-11-05 16:32:36.908721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:24.042 [2024-11-05 16:32:36.908732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.042 [2024-11-05 16:32:36.909255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.042 [2024-11-05 16:32:36.909276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:24.042 [2024-11-05 16:32:36.909379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:24.042 [2024-11-05 16:32:36.909412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:24.042 [2024-11-05 16:32:36.909603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:24.042 [2024-11-05 16:32:36.909614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:24.042 [2024-11-05 16:32:36.909945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:24.042 [2024-11-05 16:32:36.918708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:24.042 [2024-11-05 16:32:36.918736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:24.042 [2024-11-05 16:32:36.919032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.042 pt4 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.042 16:32:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.042 "name": "raid_bdev1", 00:18:24.042 "uuid": "2e5af215-30d7-45ef-911e-9cb9bcd2255c", 00:18:24.042 "strip_size_kb": 64, 00:18:24.042 "state": "online", 00:18:24.042 "raid_level": "raid5f", 00:18:24.042 "superblock": true, 00:18:24.042 "num_base_bdevs": 4, 00:18:24.042 "num_base_bdevs_discovered": 3, 00:18:24.042 "num_base_bdevs_operational": 3, 00:18:24.042 "base_bdevs_list": [ 00:18:24.042 { 00:18:24.042 "name": null, 00:18:24.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.042 "is_configured": false, 00:18:24.042 "data_offset": 2048, 00:18:24.042 "data_size": 63488 00:18:24.042 }, 00:18:24.042 { 00:18:24.042 "name": "pt2", 00:18:24.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.042 "is_configured": true, 00:18:24.042 "data_offset": 2048, 00:18:24.042 "data_size": 63488 00:18:24.042 }, 00:18:24.042 { 00:18:24.042 "name": "pt3", 00:18:24.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.042 "is_configured": true, 00:18:24.042 "data_offset": 2048, 00:18:24.042 "data_size": 63488 
00:18:24.042 }, 00:18:24.042 { 00:18:24.042 "name": "pt4", 00:18:24.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:24.042 "is_configured": true, 00:18:24.042 "data_offset": 2048, 00:18:24.042 "data_size": 63488 00:18:24.042 } 00:18:24.042 ] 00:18:24.042 }' 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.042 16:32:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:24.612 [2024-11-05 16:32:37.457074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2e5af215-30d7-45ef-911e-9cb9bcd2255c '!=' 2e5af215-30d7-45ef-911e-9cb9bcd2255c ']' 00:18:24.612 16:32:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84471 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84471 ']' 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84471 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84471 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:24.612 killing process with pid 84471 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84471' 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84471 00:18:24.612 [2024-11-05 16:32:37.530973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.612 [2024-11-05 16:32:37.531084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.612 16:32:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84471 00:18:24.612 [2024-11-05 16:32:37.531176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.612 [2024-11-05 16:32:37.531190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:25.180 [2024-11-05 16:32:37.973270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.560 16:32:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:26.560 
00:18:26.560 real 0m8.796s 00:18:26.560 user 0m13.784s 00:18:26.560 sys 0m1.468s 00:18:26.560 16:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:26.560 ************************************ 00:18:26.560 END TEST raid5f_superblock_test 00:18:26.560 ************************************ 00:18:26.560 16:32:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.560 16:32:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:26.560 16:32:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:26.560 16:32:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:26.560 16:32:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:26.560 16:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.560 ************************************ 00:18:26.560 START TEST raid5f_rebuild_test 00:18:26.560 ************************************ 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:26.560 16:32:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84958 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84958 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84958 ']' 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:26.560 16:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.560 [2024-11-05 16:32:39.397338] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:18:26.560 [2024-11-05 16:32:39.397562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:26.560 Zero copy mechanism will not be used. 
00:18:26.560 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84958 ] 00:18:26.560 [2024-11-05 16:32:39.571716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.818 [2024-11-05 16:32:39.693906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.076 [2024-11-05 16:32:39.917441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.076 [2024-11-05 16:32:39.917559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.362 BaseBdev1_malloc 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.362 [2024-11-05 16:32:40.310396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.362 [2024-11-05 16:32:40.310475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:27.362 [2024-11-05 16:32:40.310499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:27.362 [2024-11-05 16:32:40.310512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.362 [2024-11-05 16:32:40.312920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.362 [2024-11-05 16:32:40.312969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.362 BaseBdev1 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.362 BaseBdev2_malloc 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.362 [2024-11-05 16:32:40.371279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:27.362 [2024-11-05 16:32:40.371356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.362 [2024-11-05 16:32:40.371377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:27.362 [2024-11-05 16:32:40.371391] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.362 [2024-11-05 16:32:40.373821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.362 [2024-11-05 16:32:40.373953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:27.362 BaseBdev2 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.362 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 BaseBdev3_malloc 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 [2024-11-05 16:32:40.444477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:27.623 [2024-11-05 16:32:40.444615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.623 [2024-11-05 16:32:40.444644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:27.623 [2024-11-05 16:32:40.444658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.623 [2024-11-05 16:32:40.447073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.623 [2024-11-05 
16:32:40.447124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:27.623 BaseBdev3 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 BaseBdev4_malloc 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 [2024-11-05 16:32:40.502542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:27.623 [2024-11-05 16:32:40.502697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.623 [2024-11-05 16:32:40.502727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:27.623 [2024-11-05 16:32:40.502741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.623 [2024-11-05 16:32:40.505004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.623 [2024-11-05 16:32:40.505048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:27.623 BaseBdev4 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 spare_malloc 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 spare_delay 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 [2024-11-05 16:32:40.571957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.623 [2024-11-05 16:32:40.572021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.623 [2024-11-05 16:32:40.572074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:27.623 [2024-11-05 16:32:40.572085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.623 [2024-11-05 16:32:40.574393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.623 [2024-11-05 16:32:40.574432] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.623 spare 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 [2024-11-05 16:32:40.583991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.623 [2024-11-05 16:32:40.586105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.623 [2024-11-05 16:32:40.586173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:27.623 [2024-11-05 16:32:40.586230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:27.623 [2024-11-05 16:32:40.586330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:27.623 [2024-11-05 16:32:40.586344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:27.623 [2024-11-05 16:32:40.586709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:27.623 [2024-11-05 16:32:40.594985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:27.623 [2024-11-05 16:32:40.595062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:27.623 [2024-11-05 16:32:40.595333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.623 "name": "raid_bdev1", 00:18:27.623 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:27.623 "strip_size_kb": 64, 00:18:27.623 "state": "online", 00:18:27.623 "raid_level": "raid5f", 00:18:27.623 "superblock": false, 00:18:27.623 "num_base_bdevs": 4, 00:18:27.623 
"num_base_bdevs_discovered": 4, 00:18:27.623 "num_base_bdevs_operational": 4, 00:18:27.623 "base_bdevs_list": [ 00:18:27.623 { 00:18:27.623 "name": "BaseBdev1", 00:18:27.623 "uuid": "7211a78e-21e4-5202-9e04-974b842de62f", 00:18:27.623 "is_configured": true, 00:18:27.623 "data_offset": 0, 00:18:27.623 "data_size": 65536 00:18:27.623 }, 00:18:27.623 { 00:18:27.623 "name": "BaseBdev2", 00:18:27.623 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:27.623 "is_configured": true, 00:18:27.623 "data_offset": 0, 00:18:27.623 "data_size": 65536 00:18:27.623 }, 00:18:27.623 { 00:18:27.623 "name": "BaseBdev3", 00:18:27.623 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:27.623 "is_configured": true, 00:18:27.623 "data_offset": 0, 00:18:27.623 "data_size": 65536 00:18:27.623 }, 00:18:27.623 { 00:18:27.623 "name": "BaseBdev4", 00:18:27.623 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:27.623 "is_configured": true, 00:18:27.623 "data_offset": 0, 00:18:27.623 "data_size": 65536 00:18:27.623 } 00:18:27.623 ] 00:18:27.623 }' 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.623 16:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:28.192 [2024-11-05 16:32:41.060131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.192 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:28.451 [2024-11-05 16:32:41.327514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:28.451 /dev/nbd0 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:28.451 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.452 1+0 records in 00:18:28.452 1+0 records out 00:18:28.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404525 s, 10.1 MB/s 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:28.452 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:29.019 512+0 records in 00:18:29.019 512+0 records out 00:18:29.019 100663296 bytes (101 MB, 96 MiB) copied, 0.515334 s, 195 MB/s 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.019 16:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:29.279 [2024-11-05 16:32:42.147722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.279 [2024-11-05 16:32:42.166918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.279 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.279 "name": "raid_bdev1", 00:18:29.279 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:29.279 "strip_size_kb": 64, 00:18:29.279 "state": "online", 00:18:29.279 "raid_level": "raid5f", 00:18:29.279 "superblock": false, 00:18:29.279 "num_base_bdevs": 4, 00:18:29.279 "num_base_bdevs_discovered": 3, 00:18:29.279 "num_base_bdevs_operational": 3, 00:18:29.279 "base_bdevs_list": [ 00:18:29.279 { 00:18:29.279 "name": null, 00:18:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.279 "is_configured": false, 00:18:29.279 "data_offset": 0, 00:18:29.279 "data_size": 65536 00:18:29.279 }, 00:18:29.279 { 00:18:29.279 "name": "BaseBdev2", 00:18:29.279 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:29.280 "is_configured": true, 00:18:29.280 "data_offset": 0, 00:18:29.280 "data_size": 65536 00:18:29.280 }, 00:18:29.280 { 00:18:29.280 "name": "BaseBdev3", 00:18:29.280 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:29.280 "is_configured": true, 00:18:29.280 
"data_offset": 0, 00:18:29.280 "data_size": 65536 00:18:29.280 }, 00:18:29.280 { 00:18:29.280 "name": "BaseBdev4", 00:18:29.280 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:29.280 "is_configured": true, 00:18:29.280 "data_offset": 0, 00:18:29.280 "data_size": 65536 00:18:29.280 } 00:18:29.280 ] 00:18:29.280 }' 00:18:29.280 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.280 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.539 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.539 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.539 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.539 [2024-11-05 16:32:42.622204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.798 [2024-11-05 16:32:42.640453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:29.798 16:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.798 16:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:29.798 [2024-11-05 16:32:42.650519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.737 
16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.737 "name": "raid_bdev1", 00:18:30.737 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:30.737 "strip_size_kb": 64, 00:18:30.737 "state": "online", 00:18:30.737 "raid_level": "raid5f", 00:18:30.737 "superblock": false, 00:18:30.737 "num_base_bdevs": 4, 00:18:30.737 "num_base_bdevs_discovered": 4, 00:18:30.737 "num_base_bdevs_operational": 4, 00:18:30.737 "process": { 00:18:30.737 "type": "rebuild", 00:18:30.737 "target": "spare", 00:18:30.737 "progress": { 00:18:30.737 "blocks": 19200, 00:18:30.737 "percent": 9 00:18:30.737 } 00:18:30.737 }, 00:18:30.737 "base_bdevs_list": [ 00:18:30.737 { 00:18:30.737 "name": "spare", 00:18:30.737 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:30.737 "is_configured": true, 00:18:30.737 "data_offset": 0, 00:18:30.737 "data_size": 65536 00:18:30.737 }, 00:18:30.737 { 00:18:30.737 "name": "BaseBdev2", 00:18:30.737 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:30.737 "is_configured": true, 00:18:30.737 "data_offset": 0, 00:18:30.737 "data_size": 65536 00:18:30.737 }, 00:18:30.737 { 00:18:30.737 "name": "BaseBdev3", 00:18:30.737 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:30.737 "is_configured": true, 00:18:30.737 "data_offset": 0, 00:18:30.737 "data_size": 65536 00:18:30.737 }, 00:18:30.737 { 00:18:30.737 "name": "BaseBdev4", 00:18:30.737 "uuid": 
"8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:30.737 "is_configured": true, 00:18:30.737 "data_offset": 0, 00:18:30.737 "data_size": 65536 00:18:30.737 } 00:18:30.737 ] 00:18:30.737 }' 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.737 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.737 [2024-11-05 16:32:43.805578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.997 [2024-11-05 16:32:43.859283] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.997 [2024-11-05 16:32:43.859362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.997 [2024-11-05 16:32:43.859379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.997 [2024-11-05 16:32:43.859390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.997 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.998 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.998 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.998 "name": "raid_bdev1", 00:18:30.998 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:30.998 "strip_size_kb": 64, 00:18:30.998 "state": "online", 00:18:30.998 "raid_level": "raid5f", 00:18:30.998 "superblock": false, 00:18:30.998 "num_base_bdevs": 4, 00:18:30.998 "num_base_bdevs_discovered": 3, 00:18:30.998 "num_base_bdevs_operational": 3, 00:18:30.998 "base_bdevs_list": [ 00:18:30.998 { 00:18:30.998 "name": null, 00:18:30.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.998 "is_configured": false, 00:18:30.998 "data_offset": 0, 
00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev2", 00:18:30.998 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev3", 00:18:30.998 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 }, 00:18:30.998 { 00:18:30.998 "name": "BaseBdev4", 00:18:30.998 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:30.998 "is_configured": true, 00:18:30.998 "data_offset": 0, 00:18:30.998 "data_size": 65536 00:18:30.998 } 00:18:30.998 ] 00:18:30.998 }' 00:18:30.998 16:32:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.998 16:32:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.567 "name": "raid_bdev1", 00:18:31.567 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:31.567 "strip_size_kb": 64, 00:18:31.567 "state": "online", 00:18:31.567 "raid_level": "raid5f", 00:18:31.567 "superblock": false, 00:18:31.567 "num_base_bdevs": 4, 00:18:31.567 "num_base_bdevs_discovered": 3, 00:18:31.567 "num_base_bdevs_operational": 3, 00:18:31.567 "base_bdevs_list": [ 00:18:31.567 { 00:18:31.567 "name": null, 00:18:31.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.567 "is_configured": false, 00:18:31.567 "data_offset": 0, 00:18:31.567 "data_size": 65536 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "name": "BaseBdev2", 00:18:31.567 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:31.567 "is_configured": true, 00:18:31.567 "data_offset": 0, 00:18:31.567 "data_size": 65536 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "name": "BaseBdev3", 00:18:31.567 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:31.567 "is_configured": true, 00:18:31.567 "data_offset": 0, 00:18:31.567 "data_size": 65536 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "name": "BaseBdev4", 00:18:31.567 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:31.567 "is_configured": true, 00:18:31.567 "data_offset": 0, 00:18:31.567 "data_size": 65536 00:18:31.567 } 00:18:31.567 ] 00:18:31.567 }' 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 [2024-11-05 16:32:44.515394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.567 [2024-11-05 16:32:44.532383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.567 16:32:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:31.567 [2024-11-05 16:32:44.542940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.505 16:32:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.505 "name": "raid_bdev1", 00:18:32.505 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:32.505 "strip_size_kb": 64, 00:18:32.505 "state": "online", 00:18:32.505 "raid_level": "raid5f", 00:18:32.505 "superblock": false, 00:18:32.505 "num_base_bdevs": 4, 00:18:32.505 "num_base_bdevs_discovered": 4, 00:18:32.505 "num_base_bdevs_operational": 4, 00:18:32.505 "process": { 00:18:32.505 "type": "rebuild", 00:18:32.505 "target": "spare", 00:18:32.505 "progress": { 00:18:32.505 "blocks": 19200, 00:18:32.505 "percent": 9 00:18:32.505 } 00:18:32.505 }, 00:18:32.505 "base_bdevs_list": [ 00:18:32.505 { 00:18:32.505 "name": "spare", 00:18:32.505 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:32.505 "is_configured": true, 00:18:32.505 "data_offset": 0, 00:18:32.505 "data_size": 65536 00:18:32.505 }, 00:18:32.505 { 00:18:32.505 "name": "BaseBdev2", 00:18:32.505 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:32.505 "is_configured": true, 00:18:32.505 "data_offset": 0, 00:18:32.505 "data_size": 65536 00:18:32.505 }, 00:18:32.505 { 00:18:32.505 "name": "BaseBdev3", 00:18:32.505 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:32.505 "is_configured": true, 00:18:32.505 "data_offset": 0, 00:18:32.505 "data_size": 65536 00:18:32.505 }, 00:18:32.505 { 00:18:32.505 "name": "BaseBdev4", 00:18:32.505 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:32.506 "is_configured": true, 00:18:32.506 "data_offset": 0, 00:18:32.506 "data_size": 65536 00:18:32.506 } 00:18:32.506 ] 00:18:32.506 }' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=637 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.765 "name": "raid_bdev1", 00:18:32.765 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:32.765 "strip_size_kb": 64, 00:18:32.765 "state": "online", 00:18:32.765 "raid_level": "raid5f", 00:18:32.765 "superblock": false, 
00:18:32.765 "num_base_bdevs": 4, 00:18:32.765 "num_base_bdevs_discovered": 4, 00:18:32.765 "num_base_bdevs_operational": 4, 00:18:32.765 "process": { 00:18:32.765 "type": "rebuild", 00:18:32.765 "target": "spare", 00:18:32.765 "progress": { 00:18:32.765 "blocks": 21120, 00:18:32.765 "percent": 10 00:18:32.765 } 00:18:32.765 }, 00:18:32.765 "base_bdevs_list": [ 00:18:32.765 { 00:18:32.765 "name": "spare", 00:18:32.765 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:32.765 "is_configured": true, 00:18:32.765 "data_offset": 0, 00:18:32.765 "data_size": 65536 00:18:32.765 }, 00:18:32.765 { 00:18:32.765 "name": "BaseBdev2", 00:18:32.765 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:32.765 "is_configured": true, 00:18:32.765 "data_offset": 0, 00:18:32.765 "data_size": 65536 00:18:32.765 }, 00:18:32.765 { 00:18:32.765 "name": "BaseBdev3", 00:18:32.765 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:32.765 "is_configured": true, 00:18:32.765 "data_offset": 0, 00:18:32.765 "data_size": 65536 00:18:32.765 }, 00:18:32.765 { 00:18:32.765 "name": "BaseBdev4", 00:18:32.765 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:32.765 "is_configured": true, 00:18:32.765 "data_offset": 0, 00:18:32.765 "data_size": 65536 00:18:32.765 } 00:18:32.765 ] 00:18:32.765 }' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.765 16:32:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.704 16:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.963 "name": "raid_bdev1", 00:18:33.963 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:33.963 "strip_size_kb": 64, 00:18:33.963 "state": "online", 00:18:33.963 "raid_level": "raid5f", 00:18:33.963 "superblock": false, 00:18:33.963 "num_base_bdevs": 4, 00:18:33.963 "num_base_bdevs_discovered": 4, 00:18:33.963 "num_base_bdevs_operational": 4, 00:18:33.963 "process": { 00:18:33.963 "type": "rebuild", 00:18:33.963 "target": "spare", 00:18:33.963 "progress": { 00:18:33.963 "blocks": 42240, 00:18:33.963 "percent": 21 00:18:33.963 } 00:18:33.963 }, 00:18:33.963 "base_bdevs_list": [ 00:18:33.963 { 00:18:33.963 "name": "spare", 00:18:33.963 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:33.963 "is_configured": true, 00:18:33.963 "data_offset": 0, 00:18:33.963 "data_size": 65536 00:18:33.963 }, 00:18:33.963 { 00:18:33.963 
"name": "BaseBdev2", 00:18:33.963 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:33.963 "is_configured": true, 00:18:33.963 "data_offset": 0, 00:18:33.963 "data_size": 65536 00:18:33.963 }, 00:18:33.963 { 00:18:33.963 "name": "BaseBdev3", 00:18:33.963 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:33.963 "is_configured": true, 00:18:33.963 "data_offset": 0, 00:18:33.963 "data_size": 65536 00:18:33.963 }, 00:18:33.963 { 00:18:33.963 "name": "BaseBdev4", 00:18:33.963 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:33.963 "is_configured": true, 00:18:33.963 "data_offset": 0, 00:18:33.963 "data_size": 65536 00:18:33.963 } 00:18:33.963 ] 00:18:33.963 }' 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.963 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.964 16:32:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.913 "name": "raid_bdev1", 00:18:34.913 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:34.913 "strip_size_kb": 64, 00:18:34.913 "state": "online", 00:18:34.913 "raid_level": "raid5f", 00:18:34.913 "superblock": false, 00:18:34.913 "num_base_bdevs": 4, 00:18:34.913 "num_base_bdevs_discovered": 4, 00:18:34.913 "num_base_bdevs_operational": 4, 00:18:34.913 "process": { 00:18:34.913 "type": "rebuild", 00:18:34.913 "target": "spare", 00:18:34.913 "progress": { 00:18:34.913 "blocks": 63360, 00:18:34.913 "percent": 32 00:18:34.913 } 00:18:34.913 }, 00:18:34.913 "base_bdevs_list": [ 00:18:34.913 { 00:18:34.913 "name": "spare", 00:18:34.913 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:34.913 "is_configured": true, 00:18:34.913 "data_offset": 0, 00:18:34.913 "data_size": 65536 00:18:34.913 }, 00:18:34.913 { 00:18:34.913 "name": "BaseBdev2", 00:18:34.913 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:34.913 "is_configured": true, 00:18:34.913 "data_offset": 0, 00:18:34.913 "data_size": 65536 00:18:34.913 }, 00:18:34.913 { 00:18:34.913 "name": "BaseBdev3", 00:18:34.913 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:34.913 "is_configured": true, 00:18:34.913 "data_offset": 0, 00:18:34.913 "data_size": 65536 00:18:34.913 }, 00:18:34.913 { 00:18:34.913 "name": "BaseBdev4", 00:18:34.913 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:34.913 "is_configured": true, 00:18:34.913 "data_offset": 0, 00:18:34.913 
"data_size": 65536 00:18:34.913 } 00:18:34.913 ] 00:18:34.913 }' 00:18:34.913 16:32:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.172 16:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.172 16:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.172 16:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.172 16:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.110 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.110 "name": "raid_bdev1", 00:18:36.110 "uuid": 
"f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:36.110 "strip_size_kb": 64, 00:18:36.110 "state": "online", 00:18:36.110 "raid_level": "raid5f", 00:18:36.110 "superblock": false, 00:18:36.110 "num_base_bdevs": 4, 00:18:36.110 "num_base_bdevs_discovered": 4, 00:18:36.110 "num_base_bdevs_operational": 4, 00:18:36.110 "process": { 00:18:36.110 "type": "rebuild", 00:18:36.110 "target": "spare", 00:18:36.110 "progress": { 00:18:36.110 "blocks": 86400, 00:18:36.110 "percent": 43 00:18:36.110 } 00:18:36.110 }, 00:18:36.110 "base_bdevs_list": [ 00:18:36.110 { 00:18:36.110 "name": "spare", 00:18:36.110 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:36.110 "is_configured": true, 00:18:36.110 "data_offset": 0, 00:18:36.110 "data_size": 65536 00:18:36.110 }, 00:18:36.110 { 00:18:36.110 "name": "BaseBdev2", 00:18:36.110 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:36.110 "is_configured": true, 00:18:36.110 "data_offset": 0, 00:18:36.110 "data_size": 65536 00:18:36.110 }, 00:18:36.110 { 00:18:36.110 "name": "BaseBdev3", 00:18:36.110 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:36.110 "is_configured": true, 00:18:36.110 "data_offset": 0, 00:18:36.111 "data_size": 65536 00:18:36.111 }, 00:18:36.111 { 00:18:36.111 "name": "BaseBdev4", 00:18:36.111 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:36.111 "is_configured": true, 00:18:36.111 "data_offset": 0, 00:18:36.111 "data_size": 65536 00:18:36.111 } 00:18:36.111 ] 00:18:36.111 }' 00:18:36.111 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.111 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.111 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.370 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.370 16:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.306 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.306 "name": "raid_bdev1", 00:18:37.306 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:37.306 "strip_size_kb": 64, 00:18:37.306 "state": "online", 00:18:37.306 "raid_level": "raid5f", 00:18:37.306 "superblock": false, 00:18:37.306 "num_base_bdevs": 4, 00:18:37.306 "num_base_bdevs_discovered": 4, 00:18:37.306 "num_base_bdevs_operational": 4, 00:18:37.306 "process": { 00:18:37.306 "type": "rebuild", 00:18:37.306 "target": "spare", 00:18:37.306 "progress": { 00:18:37.306 "blocks": 107520, 00:18:37.306 "percent": 54 00:18:37.306 } 00:18:37.306 }, 00:18:37.306 "base_bdevs_list": [ 00:18:37.306 { 00:18:37.306 "name": "spare", 00:18:37.306 "uuid": 
"843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:37.307 "is_configured": true, 00:18:37.307 "data_offset": 0, 00:18:37.307 "data_size": 65536 00:18:37.307 }, 00:18:37.307 { 00:18:37.307 "name": "BaseBdev2", 00:18:37.307 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:37.307 "is_configured": true, 00:18:37.307 "data_offset": 0, 00:18:37.307 "data_size": 65536 00:18:37.307 }, 00:18:37.307 { 00:18:37.307 "name": "BaseBdev3", 00:18:37.307 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:37.307 "is_configured": true, 00:18:37.307 "data_offset": 0, 00:18:37.307 "data_size": 65536 00:18:37.307 }, 00:18:37.307 { 00:18:37.307 "name": "BaseBdev4", 00:18:37.307 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:37.307 "is_configured": true, 00:18:37.307 "data_offset": 0, 00:18:37.307 "data_size": 65536 00:18:37.307 } 00:18:37.307 ] 00:18:37.307 }' 00:18:37.307 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.307 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.307 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.307 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.307 16:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.686 16:32:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.686 "name": "raid_bdev1", 00:18:38.686 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:38.686 "strip_size_kb": 64, 00:18:38.686 "state": "online", 00:18:38.686 "raid_level": "raid5f", 00:18:38.686 "superblock": false, 00:18:38.686 "num_base_bdevs": 4, 00:18:38.686 "num_base_bdevs_discovered": 4, 00:18:38.686 "num_base_bdevs_operational": 4, 00:18:38.686 "process": { 00:18:38.686 "type": "rebuild", 00:18:38.686 "target": "spare", 00:18:38.686 "progress": { 00:18:38.686 "blocks": 128640, 00:18:38.686 "percent": 65 00:18:38.686 } 00:18:38.686 }, 00:18:38.686 "base_bdevs_list": [ 00:18:38.686 { 00:18:38.686 "name": "spare", 00:18:38.686 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:38.686 "is_configured": true, 00:18:38.686 "data_offset": 0, 00:18:38.686 "data_size": 65536 00:18:38.686 }, 00:18:38.686 { 00:18:38.686 "name": "BaseBdev2", 00:18:38.686 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:38.686 "is_configured": true, 00:18:38.686 "data_offset": 0, 00:18:38.686 "data_size": 65536 00:18:38.686 }, 00:18:38.686 { 00:18:38.686 "name": "BaseBdev3", 00:18:38.686 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:38.686 "is_configured": true, 00:18:38.686 "data_offset": 0, 00:18:38.686 "data_size": 65536 00:18:38.686 }, 
00:18:38.686 { 00:18:38.686 "name": "BaseBdev4", 00:18:38.686 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:38.686 "is_configured": true, 00:18:38.686 "data_offset": 0, 00:18:38.686 "data_size": 65536 00:18:38.686 } 00:18:38.686 ] 00:18:38.686 }' 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.686 16:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.623 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.623 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.623 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.623 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.624 "name": "raid_bdev1", 00:18:39.624 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:39.624 "strip_size_kb": 64, 00:18:39.624 "state": "online", 00:18:39.624 "raid_level": "raid5f", 00:18:39.624 "superblock": false, 00:18:39.624 "num_base_bdevs": 4, 00:18:39.624 "num_base_bdevs_discovered": 4, 00:18:39.624 "num_base_bdevs_operational": 4, 00:18:39.624 "process": { 00:18:39.624 "type": "rebuild", 00:18:39.624 "target": "spare", 00:18:39.624 "progress": { 00:18:39.624 "blocks": 151680, 00:18:39.624 "percent": 77 00:18:39.624 } 00:18:39.624 }, 00:18:39.624 "base_bdevs_list": [ 00:18:39.624 { 00:18:39.624 "name": "spare", 00:18:39.624 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:39.624 "is_configured": true, 00:18:39.624 "data_offset": 0, 00:18:39.624 "data_size": 65536 00:18:39.624 }, 00:18:39.624 { 00:18:39.624 "name": "BaseBdev2", 00:18:39.624 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:39.624 "is_configured": true, 00:18:39.624 "data_offset": 0, 00:18:39.624 "data_size": 65536 00:18:39.624 }, 00:18:39.624 { 00:18:39.624 "name": "BaseBdev3", 00:18:39.624 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:39.624 "is_configured": true, 00:18:39.624 "data_offset": 0, 00:18:39.624 "data_size": 65536 00:18:39.624 }, 00:18:39.624 { 00:18:39.624 "name": "BaseBdev4", 00:18:39.624 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:39.624 "is_configured": true, 00:18:39.624 "data_offset": 0, 00:18:39.624 "data_size": 65536 00:18:39.624 } 00:18:39.624 ] 00:18:39.624 }' 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.624 16:32:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.624 16:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.003 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.003 "name": "raid_bdev1", 00:18:41.003 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:41.003 "strip_size_kb": 64, 00:18:41.003 "state": "online", 00:18:41.003 "raid_level": "raid5f", 00:18:41.003 "superblock": false, 00:18:41.003 "num_base_bdevs": 4, 00:18:41.003 "num_base_bdevs_discovered": 4, 00:18:41.003 "num_base_bdevs_operational": 4, 00:18:41.003 "process": { 00:18:41.003 "type": "rebuild", 00:18:41.003 "target": "spare", 00:18:41.003 "progress": { 00:18:41.003 "blocks": 172800, 
00:18:41.003 "percent": 87 00:18:41.003 } 00:18:41.003 }, 00:18:41.003 "base_bdevs_list": [ 00:18:41.003 { 00:18:41.003 "name": "spare", 00:18:41.003 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:41.003 "is_configured": true, 00:18:41.003 "data_offset": 0, 00:18:41.003 "data_size": 65536 00:18:41.003 }, 00:18:41.003 { 00:18:41.003 "name": "BaseBdev2", 00:18:41.003 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:41.003 "is_configured": true, 00:18:41.003 "data_offset": 0, 00:18:41.003 "data_size": 65536 00:18:41.003 }, 00:18:41.003 { 00:18:41.003 "name": "BaseBdev3", 00:18:41.003 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:41.003 "is_configured": true, 00:18:41.003 "data_offset": 0, 00:18:41.003 "data_size": 65536 00:18:41.003 }, 00:18:41.003 { 00:18:41.004 "name": "BaseBdev4", 00:18:41.004 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:41.004 "is_configured": true, 00:18:41.004 "data_offset": 0, 00:18:41.004 "data_size": 65536 00:18:41.004 } 00:18:41.004 ] 00:18:41.004 }' 00:18:41.004 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.004 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.004 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.004 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.004 16:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.950 "name": "raid_bdev1", 00:18:41.950 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:41.950 "strip_size_kb": 64, 00:18:41.950 "state": "online", 00:18:41.950 "raid_level": "raid5f", 00:18:41.950 "superblock": false, 00:18:41.950 "num_base_bdevs": 4, 00:18:41.950 "num_base_bdevs_discovered": 4, 00:18:41.950 "num_base_bdevs_operational": 4, 00:18:41.950 "process": { 00:18:41.950 "type": "rebuild", 00:18:41.950 "target": "spare", 00:18:41.950 "progress": { 00:18:41.950 "blocks": 193920, 00:18:41.950 "percent": 98 00:18:41.950 } 00:18:41.950 }, 00:18:41.950 "base_bdevs_list": [ 00:18:41.950 { 00:18:41.950 "name": "spare", 00:18:41.950 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:41.950 "is_configured": true, 00:18:41.950 "data_offset": 0, 00:18:41.950 "data_size": 65536 00:18:41.950 }, 00:18:41.950 { 00:18:41.950 "name": "BaseBdev2", 00:18:41.950 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:41.950 "is_configured": true, 00:18:41.950 "data_offset": 0, 00:18:41.950 "data_size": 65536 00:18:41.950 }, 00:18:41.950 { 00:18:41.950 "name": "BaseBdev3", 00:18:41.950 "uuid": 
"1839758a-9607-5ac2-943e-06180632828a", 00:18:41.950 "is_configured": true, 00:18:41.950 "data_offset": 0, 00:18:41.950 "data_size": 65536 00:18:41.950 }, 00:18:41.950 { 00:18:41.950 "name": "BaseBdev4", 00:18:41.950 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:41.950 "is_configured": true, 00:18:41.950 "data_offset": 0, 00:18:41.950 "data_size": 65536 00:18:41.950 } 00:18:41.950 ] 00:18:41.950 }' 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.950 [2024-11-05 16:32:54.926215] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:41.950 [2024-11-05 16:32:54.926308] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:41.950 [2024-11-05 16:32:54.926367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.950 16:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.934 16:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.934 "name": "raid_bdev1", 00:18:42.934 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:42.934 "strip_size_kb": 64, 00:18:42.934 "state": "online", 00:18:42.934 "raid_level": "raid5f", 00:18:42.934 "superblock": false, 00:18:42.934 "num_base_bdevs": 4, 00:18:42.934 "num_base_bdevs_discovered": 4, 00:18:42.934 "num_base_bdevs_operational": 4, 00:18:42.934 "base_bdevs_list": [ 00:18:42.934 { 00:18:42.934 "name": "spare", 00:18:42.934 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:42.934 "is_configured": true, 00:18:42.934 "data_offset": 0, 00:18:42.934 "data_size": 65536 00:18:42.934 }, 00:18:42.934 { 00:18:42.934 "name": "BaseBdev2", 00:18:42.934 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:42.934 "is_configured": true, 00:18:42.934 "data_offset": 0, 00:18:42.934 "data_size": 65536 00:18:42.934 }, 00:18:42.934 { 00:18:42.934 "name": "BaseBdev3", 00:18:42.934 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:42.934 "is_configured": true, 00:18:42.934 "data_offset": 0, 00:18:42.934 "data_size": 65536 00:18:42.934 }, 00:18:42.934 { 00:18:42.934 "name": "BaseBdev4", 00:18:42.934 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:42.934 "is_configured": true, 00:18:42.934 "data_offset": 0, 00:18:42.934 "data_size": 65536 00:18:42.934 } 00:18:42.934 ] 00:18:42.934 }' 00:18:42.934 16:32:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.193 "name": "raid_bdev1", 00:18:43.193 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:43.193 "strip_size_kb": 64, 00:18:43.193 "state": "online", 00:18:43.193 "raid_level": "raid5f", 00:18:43.193 "superblock": false, 00:18:43.193 "num_base_bdevs": 4, 00:18:43.193 
"num_base_bdevs_discovered": 4, 00:18:43.193 "num_base_bdevs_operational": 4, 00:18:43.193 "base_bdevs_list": [ 00:18:43.193 { 00:18:43.193 "name": "spare", 00:18:43.193 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:43.193 "is_configured": true, 00:18:43.193 "data_offset": 0, 00:18:43.193 "data_size": 65536 00:18:43.193 }, 00:18:43.193 { 00:18:43.193 "name": "BaseBdev2", 00:18:43.193 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:43.193 "is_configured": true, 00:18:43.193 "data_offset": 0, 00:18:43.193 "data_size": 65536 00:18:43.193 }, 00:18:43.193 { 00:18:43.193 "name": "BaseBdev3", 00:18:43.193 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:43.193 "is_configured": true, 00:18:43.193 "data_offset": 0, 00:18:43.193 "data_size": 65536 00:18:43.193 }, 00:18:43.193 { 00:18:43.193 "name": "BaseBdev4", 00:18:43.193 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:43.193 "is_configured": true, 00:18:43.193 "data_offset": 0, 00:18:43.193 "data_size": 65536 00:18:43.193 } 00:18:43.193 ] 00:18:43.193 }' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.193 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.194 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.194 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.194 "name": "raid_bdev1", 00:18:43.194 "uuid": "f1e5e1ab-56d4-4fdf-9b2f-011641ce1b68", 00:18:43.194 "strip_size_kb": 64, 00:18:43.194 "state": "online", 00:18:43.194 "raid_level": "raid5f", 00:18:43.194 "superblock": false, 00:18:43.194 "num_base_bdevs": 4, 00:18:43.194 "num_base_bdevs_discovered": 4, 00:18:43.194 "num_base_bdevs_operational": 4, 00:18:43.194 "base_bdevs_list": [ 00:18:43.194 { 00:18:43.194 "name": "spare", 00:18:43.194 "uuid": "843f0508-5487-556e-a84a-c2f8fb95507f", 00:18:43.194 "is_configured": true, 00:18:43.194 "data_offset": 0, 00:18:43.194 "data_size": 65536 00:18:43.194 }, 00:18:43.194 { 00:18:43.194 "name": "BaseBdev2", 00:18:43.194 "uuid": "4ca4cccc-3c72-5021-8084-7d6d7554c368", 00:18:43.194 "is_configured": true, 00:18:43.194 
"data_offset": 0, 00:18:43.194 "data_size": 65536 00:18:43.194 }, 00:18:43.194 { 00:18:43.194 "name": "BaseBdev3", 00:18:43.194 "uuid": "1839758a-9607-5ac2-943e-06180632828a", 00:18:43.194 "is_configured": true, 00:18:43.194 "data_offset": 0, 00:18:43.194 "data_size": 65536 00:18:43.194 }, 00:18:43.194 { 00:18:43.194 "name": "BaseBdev4", 00:18:43.194 "uuid": "8ce4e51e-4151-58be-80c1-af41a386ad53", 00:18:43.194 "is_configured": true, 00:18:43.194 "data_offset": 0, 00:18:43.194 "data_size": 65536 00:18:43.194 } 00:18:43.194 ] 00:18:43.194 }' 00:18:43.194 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.194 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 [2024-11-05 16:32:56.674456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.762 [2024-11-05 16:32:56.674618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.762 [2024-11-05 16:32:56.674767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.762 [2024-11-05 16:32:56.674941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.762 [2024-11-05 16:32:56.675011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.762 16:32:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.762 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:44.021 /dev/nbd0 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:44.021 16:32:56 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.021 1+0 records in 00:18:44.021 1+0 records out 00:18:44.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281525 s, 14.5 MB/s 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.021 16:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:44.279 /dev/nbd1 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.279 1+0 records in 00:18:44.279 1+0 records out 00:18:44.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358671 s, 11.4 MB/s 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.279 
16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.279 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:44.538 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84958 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84958 ']' 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84958 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84958 00:18:44.797 killing process with pid 84958 00:18:44.797 
Received shutdown signal, test time was about 60.000000 seconds 00:18:44.797 00:18:44.797 Latency(us) 00:18:44.797 [2024-11-05T16:32:57.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.797 [2024-11-05T16:32:57.885Z] =================================================================================================================== 00:18:44.797 [2024-11-05T16:32:57.885Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84958' 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84958 00:18:44.797 [2024-11-05 16:32:57.886578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.797 16:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84958 00:18:45.364 [2024-11-05 16:32:58.374636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:46.738 00:18:46.738 real 0m20.170s 00:18:46.738 user 0m24.127s 00:18:46.738 sys 0m2.246s 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.738 ************************************ 00:18:46.738 END TEST raid5f_rebuild_test 00:18:46.738 ************************************ 00:18:46.738 16:32:59 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:46.738 16:32:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:18:46.738 16:32:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:46.738 16:32:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.738 ************************************ 00:18:46.738 START TEST raid5f_rebuild_test_sb 00:18:46.738 ************************************ 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85480
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85480
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85480 ']'
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:46.738 16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
16:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:46.738 [2024-11-05 16:32:59.626494] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization...
00:18:46.738 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:46.738 Zero copy mechanism will not be used.
00:18:46.738 [2024-11-05 16:32:59.627037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85480 ]
00:18:46.738 [2024-11-05 16:32:59.802315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:46.996 [2024-11-05 16:32:59.918057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:47.254 [2024-11-05 16:33:00.125554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:47.255 [2024-11-05 16:33:00.125611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.513 BaseBdev1_malloc
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.513 [2024-11-05 16:33:00.553724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:47.513 [2024-11-05 16:33:00.553785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:47.513 [2024-11-05 16:33:00.553808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:47.513 [2024-11-05 16:33:00.553821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:47.513 [2024-11-05 16:33:00.555871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:47.513 [2024-11-05 16:33:00.555908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:47.513 BaseBdev1
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.513 BaseBdev2_malloc
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.513 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 [2024-11-05 16:33:00.606363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:47.772 [2024-11-05 16:33:00.606421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:47.772 [2024-11-05 16:33:00.606440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:47.772 [2024-11-05 16:33:00.606454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:47.772 [2024-11-05 16:33:00.608591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:47.772 [2024-11-05 16:33:00.608628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:47.772 BaseBdev2
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 BaseBdev3_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 [2024-11-05 16:33:00.673947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:18:47.772 [2024-11-05 16:33:00.674019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:47.772 [2024-11-05 16:33:00.674041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:47.772 [2024-11-05 16:33:00.674053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:47.772 [2024-11-05 16:33:00.676235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:47.772 [2024-11-05 16:33:00.676275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:18:47.772 BaseBdev3
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 BaseBdev4_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 [2024-11-05 16:33:00.729401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:18:47.772 [2024-11-05 16:33:00.729458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:47.772 [2024-11-05 16:33:00.729495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:18:47.772 [2024-11-05 16:33:00.729505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:47.772 [2024-11-05 16:33:00.731539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:47.772 [2024-11-05 16:33:00.731582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:18:47.772 BaseBdev4
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 spare_malloc
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.772 spare_delay
00:18:47.772 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.773 [2024-11-05 16:33:00.799899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:47.773 [2024-11-05 16:33:00.799955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:47.773 [2024-11-05 16:33:00.799977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:47.773 [2024-11-05 16:33:00.799987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:47.773 [2024-11-05 16:33:00.802084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:47.773 [2024-11-05 16:33:00.802119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:47.773 spare
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.773 [2024-11-05 16:33:00.811959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:47.773 [2024-11-05 16:33:00.813897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:47.773 [2024-11-05 16:33:00.813961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:47.773 [2024-11-05 16:33:00.814013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:47.773 [2024-11-05 16:33:00.814197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:47.773 [2024-11-05 16:33:00.814226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:18:47.773 [2024-11-05 16:33:00.814464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:47.773 [2024-11-05 16:33:00.822382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:47.773 [2024-11-05 16:33:00.822404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:47.773 [2024-11-05 16:33:00.822624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:47.773 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.031 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:48.031 "name": "raid_bdev1",
00:18:48.031 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9",
00:18:48.031 "strip_size_kb": 64,
00:18:48.031 "state": "online",
00:18:48.031 "raid_level": "raid5f",
00:18:48.031 "superblock": true,
00:18:48.031 "num_base_bdevs": 4,
00:18:48.031 "num_base_bdevs_discovered": 4,
00:18:48.031 "num_base_bdevs_operational": 4,
00:18:48.031 "base_bdevs_list": [
00:18:48.031 {
00:18:48.031 "name": "BaseBdev1",
00:18:48.031 "uuid": "5d9d0e83-3b0a-508b-9146-7beca27ce9cb",
00:18:48.031 "is_configured": true,
00:18:48.031 "data_offset": 2048,
00:18:48.031 "data_size": 63488
00:18:48.031 },
00:18:48.031 {
00:18:48.031 "name": "BaseBdev2",
00:18:48.031 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c",
00:18:48.031 "is_configured": true,
00:18:48.031 "data_offset": 2048,
00:18:48.031 "data_size": 63488
00:18:48.031 },
00:18:48.031 {
00:18:48.031 "name": "BaseBdev3",
00:18:48.031 "uuid": "12362662-771b-575b-84e2-651e9f2a344f",
00:18:48.031 "is_configured": true,
00:18:48.031 "data_offset": 2048,
00:18:48.031 "data_size": 63488
00:18:48.031 },
00:18:48.031 {
00:18:48.031 "name": "BaseBdev4",
00:18:48.031 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8",
00:18:48.031 "is_configured": true,
00:18:48.031 "data_offset": 2048,
00:18:48.031 "data_size": 63488
00:18:48.031 }
00:18:48.031 ]
00:18:48.031 }'
00:18:48.031 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:48.031 16:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:48.290 [2024-11-05 16:33:01.338443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:48.290 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:48.549 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
[2024-11-05 16:33:01.621736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
/dev/nbd0
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
1+0 records out
00:18:48.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432008 s, 9.5 MB/s
00:18:48.808 16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192
16:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
496+0 records in
496+0 records out
97517568 bytes (98 MB, 93 MiB) copied, 0.514433 s, 190 MB/s
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
[2024-11-05 16:33:02.430805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-05 16:33:02.441780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:49.634 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.634 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:49.634 "name": "raid_bdev1",
00:18:49.634 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9",
00:18:49.634 "strip_size_kb": 64,
00:18:49.634 "state": "online",
00:18:49.634 "raid_level": "raid5f",
00:18:49.634 "superblock": true,
00:18:49.634 "num_base_bdevs": 4,
00:18:49.634 "num_base_bdevs_discovered": 3,
00:18:49.634 "num_base_bdevs_operational": 3,
00:18:49.634 "base_bdevs_list": [
00:18:49.634 {
00:18:49.634 "name": null,
00:18:49.634 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:49.634 "is_configured": false,
00:18:49.634 "data_offset": 0,
00:18:49.634 "data_size": 63488
00:18:49.634 },
00:18:49.634 {
00:18:49.634 "name": "BaseBdev2",
00:18:49.634 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c",
00:18:49.634 "is_configured": true,
00:18:49.634 "data_offset": 2048,
00:18:49.634 "data_size": 63488
00:18:49.634 },
00:18:49.634 {
00:18:49.634 "name": "BaseBdev3",
00:18:49.634 "uuid": "12362662-771b-575b-84e2-651e9f2a344f",
00:18:49.634 "is_configured": true,
00:18:49.634 "data_offset": 2048,
00:18:49.634 "data_size": 63488
00:18:49.634 },
00:18:49.634 {
00:18:49.634 "name": "BaseBdev4",
00:18:49.634 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8",
00:18:49.634 "is_configured": true,
00:18:49.634 "data_offset": 2048,
00:18:49.634 "data_size": 63488
00:18:49.634 }
00:18:49.634 ]
00:18:49.634 }'
00:18:49.634 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:49.634 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:49.893 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:49.893 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.893 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:49.893 [2024-11-05 16:33:02.948960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:49.893 [2024-11-05 16:33:02.966984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50
00:18:49.893 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.893 16:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:49.893 [2024-11-05 16:33:02.978386] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.269 16:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:51.269 "name": "raid_bdev1",
00:18:51.269 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9",
00:18:51.269 "strip_size_kb": 64,
00:18:51.269 "state": "online",
00:18:51.269 "raid_level": "raid5f",
00:18:51.269 "superblock": true,
00:18:51.269 "num_base_bdevs": 4,
00:18:51.269 "num_base_bdevs_discovered": 4,
00:18:51.269 "num_base_bdevs_operational": 4,
00:18:51.269 "process": {
00:18:51.269 "type": "rebuild",
00:18:51.269 "target": "spare",
00:18:51.269 "progress": {
00:18:51.269 "blocks": 17280,
00:18:51.269 "percent": 9
00:18:51.269 }
00:18:51.269 },
00:18:51.269 "base_bdevs_list": [
00:18:51.269 {
00:18:51.269 "name": "spare",
00:18:51.269 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1",
00:18:51.269 "is_configured": true,
00:18:51.269 "data_offset": 2048,
00:18:51.269 "data_size": 63488
00:18:51.269 },
00:18:51.269 {
00:18:51.269 "name": "BaseBdev2",
00:18:51.269 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c",
00:18:51.269 "is_configured": true,
00:18:51.269 "data_offset": 2048,
00:18:51.269 "data_size": 63488
00:18:51.269 },
00:18:51.269 {
00:18:51.269 "name": "BaseBdev3",
00:18:51.269 "uuid": "12362662-771b-575b-84e2-651e9f2a344f",
00:18:51.269 "is_configured": true,
00:18:51.269 "data_offset": 2048,
00:18:51.269 "data_size": 63488
00:18:51.269 },
00:18:51.269 {
00:18:51.269 "name": "BaseBdev4",
00:18:51.269 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8",
00:18:51.269 "is_configured": true,
00:18:51.269 "data_offset": 2048,
00:18:51.269 "data_size": 63488
00:18:51.269 }
00:18:51.269 ]
00:18:51.269 }'
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.269 [2024-11-05 16:33:04.133711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:51.269 [2024-11-05 16:33:04.187824] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:51.269 [2024-11-05 16:33:04.188007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:51.269 [2024-11-05 16:33:04.188047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:51.269 [2024-11-05 16:33:04.188071] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.269 "name": "raid_bdev1", 00:18:51.269 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:51.269 "strip_size_kb": 64, 00:18:51.269 "state": "online", 00:18:51.269 "raid_level": "raid5f", 00:18:51.269 "superblock": true, 00:18:51.269 "num_base_bdevs": 4, 00:18:51.269 "num_base_bdevs_discovered": 3, 00:18:51.269 "num_base_bdevs_operational": 3, 00:18:51.269 "base_bdevs_list": [ 00:18:51.269 { 00:18:51.269 "name": null, 00:18:51.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.269 "is_configured": false, 00:18:51.269 "data_offset": 0, 00:18:51.269 "data_size": 63488 00:18:51.269 }, 00:18:51.269 { 00:18:51.269 "name": "BaseBdev2", 00:18:51.269 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:51.269 "is_configured": true, 00:18:51.269 "data_offset": 2048, 00:18:51.269 "data_size": 63488 00:18:51.269 }, 00:18:51.269 { 00:18:51.269 "name": "BaseBdev3", 00:18:51.269 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:51.269 "is_configured": true, 00:18:51.269 "data_offset": 2048, 00:18:51.269 "data_size": 63488 00:18:51.269 }, 00:18:51.269 { 00:18:51.269 "name": "BaseBdev4", 00:18:51.269 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:51.269 "is_configured": true, 00:18:51.269 "data_offset": 2048, 00:18:51.269 "data_size": 63488 00:18:51.269 } 00:18:51.269 ] 00:18:51.269 }' 00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.269 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.835 
16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.835 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.835 "name": "raid_bdev1", 00:18:51.835 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:51.835 "strip_size_kb": 64, 00:18:51.835 "state": "online", 00:18:51.835 "raid_level": "raid5f", 00:18:51.835 "superblock": true, 00:18:51.835 "num_base_bdevs": 4, 00:18:51.835 "num_base_bdevs_discovered": 3, 00:18:51.835 "num_base_bdevs_operational": 3, 00:18:51.835 "base_bdevs_list": [ 00:18:51.835 { 00:18:51.835 "name": null, 00:18:51.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.836 "is_configured": false, 00:18:51.836 "data_offset": 0, 00:18:51.836 "data_size": 63488 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev2", 00:18:51.836 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:51.836 "is_configured": true, 00:18:51.836 "data_offset": 2048, 00:18:51.836 "data_size": 63488 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev3", 00:18:51.836 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:51.836 "is_configured": true, 00:18:51.836 "data_offset": 2048, 00:18:51.836 
"data_size": 63488 00:18:51.836 }, 00:18:51.836 { 00:18:51.836 "name": "BaseBdev4", 00:18:51.836 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:51.836 "is_configured": true, 00:18:51.836 "data_offset": 2048, 00:18:51.836 "data_size": 63488 00:18:51.836 } 00:18:51.836 ] 00:18:51.836 }' 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.836 [2024-11-05 16:33:04.793912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.836 [2024-11-05 16:33:04.811007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.836 16:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:51.836 [2024-11-05 16:33:04.821955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.772 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.032 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.032 "name": "raid_bdev1", 00:18:53.032 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:53.033 "strip_size_kb": 64, 00:18:53.033 "state": "online", 00:18:53.033 "raid_level": "raid5f", 00:18:53.033 "superblock": true, 00:18:53.033 "num_base_bdevs": 4, 00:18:53.033 "num_base_bdevs_discovered": 4, 00:18:53.033 "num_base_bdevs_operational": 4, 00:18:53.033 "process": { 00:18:53.033 "type": "rebuild", 00:18:53.033 "target": "spare", 00:18:53.033 "progress": { 00:18:53.033 "blocks": 17280, 00:18:53.033 "percent": 9 00:18:53.033 } 00:18:53.033 }, 00:18:53.033 "base_bdevs_list": [ 00:18:53.033 { 00:18:53.033 "name": "spare", 00:18:53.033 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 00:18:53.033 "name": "BaseBdev2", 00:18:53.033 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 
00:18:53.033 "name": "BaseBdev3", 00:18:53.033 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 00:18:53.033 "name": "BaseBdev4", 00:18:53.033 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 } 00:18:53.033 ] 00:18:53.033 }' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:53.033 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.033 16:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.033 "name": "raid_bdev1", 00:18:53.033 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:53.033 "strip_size_kb": 64, 00:18:53.033 "state": "online", 00:18:53.033 "raid_level": "raid5f", 00:18:53.033 "superblock": true, 00:18:53.033 "num_base_bdevs": 4, 00:18:53.033 "num_base_bdevs_discovered": 4, 00:18:53.033 "num_base_bdevs_operational": 4, 00:18:53.033 "process": { 00:18:53.033 "type": "rebuild", 00:18:53.033 "target": "spare", 00:18:53.033 "progress": { 00:18:53.033 "blocks": 21120, 00:18:53.033 "percent": 11 00:18:53.033 } 00:18:53.033 }, 00:18:53.033 "base_bdevs_list": [ 00:18:53.033 { 00:18:53.033 "name": "spare", 00:18:53.033 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 00:18:53.033 "name": "BaseBdev2", 00:18:53.033 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 
00:18:53.033 "name": "BaseBdev3", 00:18:53.033 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 }, 00:18:53.033 { 00:18:53.033 "name": "BaseBdev4", 00:18:53.033 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:53.033 "is_configured": true, 00:18:53.033 "data_offset": 2048, 00:18:53.033 "data_size": 63488 00:18:53.033 } 00:18:53.033 ] 00:18:53.033 }' 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.033 16:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.408 "name": "raid_bdev1", 00:18:54.408 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:54.408 "strip_size_kb": 64, 00:18:54.408 "state": "online", 00:18:54.408 "raid_level": "raid5f", 00:18:54.408 "superblock": true, 00:18:54.408 "num_base_bdevs": 4, 00:18:54.408 "num_base_bdevs_discovered": 4, 00:18:54.408 "num_base_bdevs_operational": 4, 00:18:54.408 "process": { 00:18:54.408 "type": "rebuild", 00:18:54.408 "target": "spare", 00:18:54.408 "progress": { 00:18:54.408 "blocks": 42240, 00:18:54.408 "percent": 22 00:18:54.408 } 00:18:54.408 }, 00:18:54.408 "base_bdevs_list": [ 00:18:54.408 { 00:18:54.408 "name": "spare", 00:18:54.408 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:54.408 "is_configured": true, 00:18:54.408 "data_offset": 2048, 00:18:54.408 "data_size": 63488 00:18:54.408 }, 00:18:54.408 { 00:18:54.408 "name": "BaseBdev2", 00:18:54.408 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:54.408 "is_configured": true, 00:18:54.408 "data_offset": 2048, 00:18:54.408 "data_size": 63488 00:18:54.408 }, 00:18:54.408 { 00:18:54.408 "name": "BaseBdev3", 00:18:54.408 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:54.408 "is_configured": true, 00:18:54.408 "data_offset": 2048, 00:18:54.408 "data_size": 63488 00:18:54.408 }, 00:18:54.408 { 00:18:54.408 "name": "BaseBdev4", 00:18:54.408 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:54.408 "is_configured": true, 00:18:54.408 "data_offset": 2048, 00:18:54.408 "data_size": 63488 00:18:54.408 } 00:18:54.408 ] 00:18:54.408 }' 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.408 16:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.343 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.343 "name": "raid_bdev1", 00:18:55.343 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:55.343 "strip_size_kb": 64, 00:18:55.343 "state": "online", 00:18:55.343 
"raid_level": "raid5f", 00:18:55.343 "superblock": true, 00:18:55.343 "num_base_bdevs": 4, 00:18:55.343 "num_base_bdevs_discovered": 4, 00:18:55.343 "num_base_bdevs_operational": 4, 00:18:55.343 "process": { 00:18:55.343 "type": "rebuild", 00:18:55.343 "target": "spare", 00:18:55.343 "progress": { 00:18:55.343 "blocks": 65280, 00:18:55.343 "percent": 34 00:18:55.343 } 00:18:55.343 }, 00:18:55.343 "base_bdevs_list": [ 00:18:55.343 { 00:18:55.343 "name": "spare", 00:18:55.343 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:55.343 "is_configured": true, 00:18:55.343 "data_offset": 2048, 00:18:55.343 "data_size": 63488 00:18:55.343 }, 00:18:55.343 { 00:18:55.343 "name": "BaseBdev2", 00:18:55.343 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:55.343 "is_configured": true, 00:18:55.343 "data_offset": 2048, 00:18:55.343 "data_size": 63488 00:18:55.343 }, 00:18:55.343 { 00:18:55.343 "name": "BaseBdev3", 00:18:55.343 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:55.343 "is_configured": true, 00:18:55.343 "data_offset": 2048, 00:18:55.343 "data_size": 63488 00:18:55.343 }, 00:18:55.343 { 00:18:55.343 "name": "BaseBdev4", 00:18:55.343 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:55.343 "is_configured": true, 00:18:55.343 "data_offset": 2048, 00:18:55.343 "data_size": 63488 00:18:55.343 } 00:18:55.343 ] 00:18:55.343 }' 00:18:55.344 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.344 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.344 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.344 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.344 16:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.724 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.725 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.725 "name": "raid_bdev1", 00:18:56.725 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:56.725 "strip_size_kb": 64, 00:18:56.725 "state": "online", 00:18:56.725 "raid_level": "raid5f", 00:18:56.725 "superblock": true, 00:18:56.725 "num_base_bdevs": 4, 00:18:56.725 "num_base_bdevs_discovered": 4, 00:18:56.725 "num_base_bdevs_operational": 4, 00:18:56.725 "process": { 00:18:56.725 "type": "rebuild", 00:18:56.725 "target": "spare", 00:18:56.725 "progress": { 00:18:56.725 "blocks": 86400, 00:18:56.725 "percent": 45 00:18:56.725 } 00:18:56.725 }, 00:18:56.725 "base_bdevs_list": [ 00:18:56.725 { 00:18:56.725 "name": "spare", 00:18:56.725 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:56.725 "is_configured": true, 
00:18:56.725 "data_offset": 2048, 00:18:56.725 "data_size": 63488 00:18:56.725 }, 00:18:56.725 { 00:18:56.725 "name": "BaseBdev2", 00:18:56.725 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:56.725 "is_configured": true, 00:18:56.725 "data_offset": 2048, 00:18:56.725 "data_size": 63488 00:18:56.725 }, 00:18:56.725 { 00:18:56.725 "name": "BaseBdev3", 00:18:56.725 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:56.725 "is_configured": true, 00:18:56.725 "data_offset": 2048, 00:18:56.725 "data_size": 63488 00:18:56.725 }, 00:18:56.725 { 00:18:56.725 "name": "BaseBdev4", 00:18:56.725 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:56.725 "is_configured": true, 00:18:56.725 "data_offset": 2048, 00:18:56.725 "data_size": 63488 00:18:56.726 } 00:18:56.726 ] 00:18:56.726 }' 00:18:56.726 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.726 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.726 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.726 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.726 16:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.675 "name": "raid_bdev1", 00:18:57.675 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:57.675 "strip_size_kb": 64, 00:18:57.675 "state": "online", 00:18:57.675 "raid_level": "raid5f", 00:18:57.675 "superblock": true, 00:18:57.675 "num_base_bdevs": 4, 00:18:57.675 "num_base_bdevs_discovered": 4, 00:18:57.675 "num_base_bdevs_operational": 4, 00:18:57.675 "process": { 00:18:57.675 "type": "rebuild", 00:18:57.675 "target": "spare", 00:18:57.675 "progress": { 00:18:57.675 "blocks": 107520, 00:18:57.675 "percent": 56 00:18:57.675 } 00:18:57.675 }, 00:18:57.675 "base_bdevs_list": [ 00:18:57.675 { 00:18:57.675 "name": "spare", 00:18:57.675 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:57.675 "is_configured": true, 00:18:57.675 "data_offset": 2048, 00:18:57.675 "data_size": 63488 00:18:57.675 }, 00:18:57.675 { 00:18:57.675 "name": "BaseBdev2", 00:18:57.675 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:57.675 "is_configured": true, 00:18:57.675 "data_offset": 2048, 00:18:57.675 "data_size": 63488 00:18:57.675 }, 00:18:57.675 { 00:18:57.675 "name": "BaseBdev3", 00:18:57.675 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:57.675 "is_configured": true, 00:18:57.675 "data_offset": 2048, 00:18:57.675 "data_size": 63488 00:18:57.675 }, 00:18:57.675 
{ 00:18:57.675 "name": "BaseBdev4", 00:18:57.675 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:57.675 "is_configured": true, 00:18:57.675 "data_offset": 2048, 00:18:57.675 "data_size": 63488 00:18:57.675 } 00:18:57.675 ] 00:18:57.675 }' 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.675 16:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.613 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.873 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.873 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.873 "name": "raid_bdev1", 00:18:58.873 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:58.873 "strip_size_kb": 64, 00:18:58.873 "state": "online", 00:18:58.873 "raid_level": "raid5f", 00:18:58.873 "superblock": true, 00:18:58.873 "num_base_bdevs": 4, 00:18:58.873 "num_base_bdevs_discovered": 4, 00:18:58.873 "num_base_bdevs_operational": 4, 00:18:58.873 "process": { 00:18:58.873 "type": "rebuild", 00:18:58.873 "target": "spare", 00:18:58.873 "progress": { 00:18:58.873 "blocks": 130560, 00:18:58.873 "percent": 68 00:18:58.873 } 00:18:58.874 }, 00:18:58.874 "base_bdevs_list": [ 00:18:58.874 { 00:18:58.874 "name": "spare", 00:18:58.874 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:58.874 "is_configured": true, 00:18:58.874 "data_offset": 2048, 00:18:58.874 "data_size": 63488 00:18:58.874 }, 00:18:58.874 { 00:18:58.874 "name": "BaseBdev2", 00:18:58.874 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:58.874 "is_configured": true, 00:18:58.874 "data_offset": 2048, 00:18:58.874 "data_size": 63488 00:18:58.874 }, 00:18:58.874 { 00:18:58.874 "name": "BaseBdev3", 00:18:58.874 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:58.874 "is_configured": true, 00:18:58.874 "data_offset": 2048, 00:18:58.874 "data_size": 63488 00:18:58.874 }, 00:18:58.874 { 00:18:58.874 "name": "BaseBdev4", 00:18:58.874 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:58.874 "is_configured": true, 00:18:58.874 "data_offset": 2048, 00:18:58.874 "data_size": 63488 00:18:58.874 } 00:18:58.874 ] 00:18:58.874 }' 00:18:58.874 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.874 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.874 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:58.874 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.874 16:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.811 "name": "raid_bdev1", 00:18:59.811 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:18:59.811 "strip_size_kb": 64, 00:18:59.811 "state": "online", 00:18:59.811 "raid_level": "raid5f", 00:18:59.811 "superblock": true, 00:18:59.811 "num_base_bdevs": 4, 00:18:59.811 "num_base_bdevs_discovered": 4, 00:18:59.811 "num_base_bdevs_operational": 4, 00:18:59.811 "process": { 00:18:59.811 "type": 
"rebuild", 00:18:59.811 "target": "spare", 00:18:59.811 "progress": { 00:18:59.811 "blocks": 151680, 00:18:59.811 "percent": 79 00:18:59.811 } 00:18:59.811 }, 00:18:59.811 "base_bdevs_list": [ 00:18:59.811 { 00:18:59.811 "name": "spare", 00:18:59.811 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:18:59.811 "is_configured": true, 00:18:59.811 "data_offset": 2048, 00:18:59.811 "data_size": 63488 00:18:59.811 }, 00:18:59.811 { 00:18:59.811 "name": "BaseBdev2", 00:18:59.811 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:18:59.811 "is_configured": true, 00:18:59.811 "data_offset": 2048, 00:18:59.811 "data_size": 63488 00:18:59.811 }, 00:18:59.811 { 00:18:59.811 "name": "BaseBdev3", 00:18:59.811 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:18:59.811 "is_configured": true, 00:18:59.811 "data_offset": 2048, 00:18:59.811 "data_size": 63488 00:18:59.811 }, 00:18:59.811 { 00:18:59.811 "name": "BaseBdev4", 00:18:59.811 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:18:59.811 "is_configured": true, 00:18:59.811 "data_offset": 2048, 00:18:59.811 "data_size": 63488 00:18:59.811 } 00:18:59.811 ] 00:18:59.811 }' 00:18:59.811 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.071 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.071 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.071 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.071 16:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.007 16:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.007 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.007 "name": "raid_bdev1", 00:19:01.007 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:01.007 "strip_size_kb": 64, 00:19:01.007 "state": "online", 00:19:01.007 "raid_level": "raid5f", 00:19:01.007 "superblock": true, 00:19:01.007 "num_base_bdevs": 4, 00:19:01.007 "num_base_bdevs_discovered": 4, 00:19:01.007 "num_base_bdevs_operational": 4, 00:19:01.007 "process": { 00:19:01.007 "type": "rebuild", 00:19:01.007 "target": "spare", 00:19:01.007 "progress": { 00:19:01.007 "blocks": 174720, 00:19:01.007 "percent": 91 00:19:01.007 } 00:19:01.007 }, 00:19:01.007 "base_bdevs_list": [ 00:19:01.007 { 00:19:01.007 "name": "spare", 00:19:01.007 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:01.007 "is_configured": true, 00:19:01.007 "data_offset": 2048, 00:19:01.007 "data_size": 63488 00:19:01.007 }, 00:19:01.007 { 00:19:01.007 "name": "BaseBdev2", 00:19:01.007 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:01.007 
"is_configured": true, 00:19:01.007 "data_offset": 2048, 00:19:01.007 "data_size": 63488 00:19:01.007 }, 00:19:01.007 { 00:19:01.007 "name": "BaseBdev3", 00:19:01.007 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:01.007 "is_configured": true, 00:19:01.007 "data_offset": 2048, 00:19:01.007 "data_size": 63488 00:19:01.007 }, 00:19:01.007 { 00:19:01.007 "name": "BaseBdev4", 00:19:01.007 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:01.007 "is_configured": true, 00:19:01.007 "data_offset": 2048, 00:19:01.007 "data_size": 63488 00:19:01.007 } 00:19:01.007 ] 00:19:01.007 }' 00:19:01.007 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.007 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.007 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.265 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.265 16:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.839 [2024-11-05 16:33:14.890505] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:01.839 [2024-11-05 16:33:14.890666] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:01.839 [2024-11-05 16:33:14.890847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.097 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.356 "name": "raid_bdev1", 00:19:02.356 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:02.356 "strip_size_kb": 64, 00:19:02.356 "state": "online", 00:19:02.356 "raid_level": "raid5f", 00:19:02.356 "superblock": true, 00:19:02.356 "num_base_bdevs": 4, 00:19:02.356 "num_base_bdevs_discovered": 4, 00:19:02.356 "num_base_bdevs_operational": 4, 00:19:02.356 "base_bdevs_list": [ 00:19:02.356 { 00:19:02.356 "name": "spare", 00:19:02.356 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": "BaseBdev2", 00:19:02.356 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": "BaseBdev3", 00:19:02.356 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": 
"BaseBdev4", 00:19:02.356 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 } 00:19:02.356 ] 00:19:02.356 }' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:02.356 "name": "raid_bdev1", 00:19:02.356 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:02.356 "strip_size_kb": 64, 00:19:02.356 "state": "online", 00:19:02.356 "raid_level": "raid5f", 00:19:02.356 "superblock": true, 00:19:02.356 "num_base_bdevs": 4, 00:19:02.356 "num_base_bdevs_discovered": 4, 00:19:02.356 "num_base_bdevs_operational": 4, 00:19:02.356 "base_bdevs_list": [ 00:19:02.356 { 00:19:02.356 "name": "spare", 00:19:02.356 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": "BaseBdev2", 00:19:02.356 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": "BaseBdev3", 00:19:02.356 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 }, 00:19:02.356 { 00:19:02.356 "name": "BaseBdev4", 00:19:02.356 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:02.356 "is_configured": true, 00:19:02.356 "data_offset": 2048, 00:19:02.356 "data_size": 63488 00:19:02.356 } 00:19:02.356 ] 00:19:02.356 }' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.356 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.615 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.615 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.615 "name": "raid_bdev1", 00:19:02.615 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:02.615 "strip_size_kb": 64, 00:19:02.615 "state": "online", 00:19:02.615 "raid_level": "raid5f", 00:19:02.615 "superblock": true, 00:19:02.615 "num_base_bdevs": 4, 00:19:02.615 "num_base_bdevs_discovered": 4, 00:19:02.615 "num_base_bdevs_operational": 4, 00:19:02.615 "base_bdevs_list": [ 00:19:02.615 { 
00:19:02.615 "name": "spare", 00:19:02.615 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:02.615 "is_configured": true, 00:19:02.615 "data_offset": 2048, 00:19:02.615 "data_size": 63488 00:19:02.615 }, 00:19:02.615 { 00:19:02.615 "name": "BaseBdev2", 00:19:02.615 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:02.615 "is_configured": true, 00:19:02.615 "data_offset": 2048, 00:19:02.615 "data_size": 63488 00:19:02.615 }, 00:19:02.615 { 00:19:02.615 "name": "BaseBdev3", 00:19:02.615 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:02.615 "is_configured": true, 00:19:02.615 "data_offset": 2048, 00:19:02.615 "data_size": 63488 00:19:02.615 }, 00:19:02.615 { 00:19:02.615 "name": "BaseBdev4", 00:19:02.615 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:02.615 "is_configured": true, 00:19:02.615 "data_offset": 2048, 00:19:02.615 "data_size": 63488 00:19:02.615 } 00:19:02.615 ] 00:19:02.615 }' 00:19:02.615 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.615 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.875 [2024-11-05 16:33:15.881562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.875 [2024-11-05 16:33:15.881645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.875 [2024-11-05 16:33:15.881754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.875 [2024-11-05 16:33:15.881889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.875 [2024-11-05 
16:33:15.881950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.875 16:33:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.875 16:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:03.135 /dev/nbd0 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.135 1+0 records in 00:19:03.135 1+0 records out 00:19:03.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527836 s, 7.8 MB/s 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.135 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:03.394 /dev/nbd1 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.394 1+0 records in 00:19:03.394 
1+0 records out 00:19:03.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381498 s, 10.7 MB/s 00:19:03.394 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.654 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.914 
16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.914 16:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 [2024-11-05 16:33:17.148082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:04.174 [2024-11-05 16:33:17.148145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.174 [2024-11-05 16:33:17.148176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:04.174 [2024-11-05 16:33:17.148186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.174 [2024-11-05 16:33:17.150817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.174 [2024-11-05 16:33:17.150853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:04.174 [2024-11-05 16:33:17.150954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:04.174 [2024-11-05 16:33:17.151006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.174 [2024-11-05 16:33:17.151164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.174 [2024-11-05 16:33:17.151271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.174 [2024-11-05 16:33:17.151345] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:04.174 spare 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.174 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.174 [2024-11-05 16:33:17.251254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:04.174 [2024-11-05 16:33:17.251304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:04.174 [2024-11-05 16:33:17.251671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:04.174 [2024-11-05 16:33:17.258728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:04.174 [2024-11-05 16:33:17.258749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:04.174 [2024-11-05 16:33:17.258968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.434 "name": "raid_bdev1", 00:19:04.434 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:04.434 "strip_size_kb": 64, 00:19:04.434 "state": "online", 00:19:04.434 "raid_level": "raid5f", 00:19:04.434 "superblock": true, 00:19:04.434 "num_base_bdevs": 4, 00:19:04.434 "num_base_bdevs_discovered": 4, 00:19:04.434 "num_base_bdevs_operational": 4, 00:19:04.434 "base_bdevs_list": [ 00:19:04.434 { 00:19:04.434 "name": "spare", 00:19:04.434 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:04.434 "is_configured": true, 00:19:04.434 "data_offset": 2048, 00:19:04.434 "data_size": 63488 00:19:04.434 }, 00:19:04.434 { 00:19:04.434 "name": "BaseBdev2", 00:19:04.434 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:04.434 "is_configured": true, 00:19:04.434 "data_offset": 
2048, 00:19:04.434 "data_size": 63488 00:19:04.434 }, 00:19:04.434 { 00:19:04.434 "name": "BaseBdev3", 00:19:04.434 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:04.434 "is_configured": true, 00:19:04.434 "data_offset": 2048, 00:19:04.434 "data_size": 63488 00:19:04.434 }, 00:19:04.434 { 00:19:04.434 "name": "BaseBdev4", 00:19:04.434 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:04.434 "is_configured": true, 00:19:04.434 "data_offset": 2048, 00:19:04.434 "data_size": 63488 00:19:04.434 } 00:19:04.434 ] 00:19:04.434 }' 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.434 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.694 "name": 
"raid_bdev1", 00:19:04.694 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:04.694 "strip_size_kb": 64, 00:19:04.694 "state": "online", 00:19:04.694 "raid_level": "raid5f", 00:19:04.694 "superblock": true, 00:19:04.694 "num_base_bdevs": 4, 00:19:04.694 "num_base_bdevs_discovered": 4, 00:19:04.694 "num_base_bdevs_operational": 4, 00:19:04.694 "base_bdevs_list": [ 00:19:04.694 { 00:19:04.694 "name": "spare", 00:19:04.694 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:04.694 "is_configured": true, 00:19:04.694 "data_offset": 2048, 00:19:04.694 "data_size": 63488 00:19:04.694 }, 00:19:04.694 { 00:19:04.694 "name": "BaseBdev2", 00:19:04.694 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:04.694 "is_configured": true, 00:19:04.694 "data_offset": 2048, 00:19:04.694 "data_size": 63488 00:19:04.694 }, 00:19:04.694 { 00:19:04.694 "name": "BaseBdev3", 00:19:04.694 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:04.694 "is_configured": true, 00:19:04.694 "data_offset": 2048, 00:19:04.694 "data_size": 63488 00:19:04.694 }, 00:19:04.694 { 00:19:04.694 "name": "BaseBdev4", 00:19:04.694 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:04.694 "is_configured": true, 00:19:04.694 "data_offset": 2048, 00:19:04.694 "data_size": 63488 00:19:04.694 } 00:19:04.694 ] 00:19:04.694 }' 00:19:04.694 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
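[Annotation] The `verify_raid_bdev_process raid_bdev1 none none` step above relies on jq's alternative operator: `.process.type // "none"` yields the literal string `none` whenever no background process object is present in the raid bdev info, which is exactly what the `[[ none == \n\o\n\e ]]` match asserts. A minimal reproduction (with a trimmed, hypothetical stand-in for the rpc output):

```shell
#!/usr/bin/env bash
# The jq "//" alternative operator falls back to "none" when .process is absent,
# i.e. when no rebuild is currently running on the raid bdev.
raid_bdev_info='{"name": "raid_bdev1", "state": "online"}'
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
echo "$process_type"   # prints "none": there is no .process object in the input
```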
00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.953 [2024-11-05 16:33:17.878789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.953 "name": "raid_bdev1", 00:19:04.953 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:04.953 "strip_size_kb": 64, 00:19:04.953 "state": "online", 00:19:04.953 "raid_level": "raid5f", 00:19:04.953 "superblock": true, 00:19:04.953 "num_base_bdevs": 4, 00:19:04.953 "num_base_bdevs_discovered": 3, 00:19:04.953 "num_base_bdevs_operational": 3, 00:19:04.953 "base_bdevs_list": [ 00:19:04.953 { 00:19:04.953 "name": null, 00:19:04.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.953 "is_configured": false, 00:19:04.953 "data_offset": 0, 00:19:04.953 "data_size": 63488 00:19:04.953 }, 00:19:04.953 { 00:19:04.953 "name": "BaseBdev2", 00:19:04.953 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:04.953 "is_configured": true, 00:19:04.953 "data_offset": 2048, 00:19:04.953 "data_size": 63488 00:19:04.953 }, 00:19:04.953 { 00:19:04.953 "name": "BaseBdev3", 00:19:04.953 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:04.953 "is_configured": true, 00:19:04.953 "data_offset": 2048, 00:19:04.953 "data_size": 63488 00:19:04.953 }, 00:19:04.953 { 00:19:04.953 "name": "BaseBdev4", 00:19:04.953 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:04.953 "is_configured": true, 00:19:04.953 "data_offset": 
2048, 00:19:04.953 "data_size": 63488 00:19:04.953 } 00:19:04.953 ] 00:19:04.953 }' 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.953 16:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.521 16:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:05.521 16:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.521 16:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.521 [2024-11-05 16:33:18.342074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.521 [2024-11-05 16:33:18.342340] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.521 [2024-11-05 16:33:18.342405] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
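[Annotation] After `bdev_raid_remove_base_bdev spare` above, the array stays `online` but one slot becomes a null placeholder, so the expected discovered count drops from 4 to 3 while `num_base_bdevs` stays 4. A small sketch of reading those counters back (the JSON is a trimmed, hypothetical stand-in for the rpc output):

```shell
#!/usr/bin/env bash
# After removing one base bdev, the raid5f array degrades but stays online:
# num_base_bdevs_discovered drops to 3 of the configured 4 members.
raid_bdev_info='{"state": "online", "num_base_bdevs": 4, "num_base_bdevs_discovered": 3}'
summary=$(echo "$raid_bdev_info" |
    jq -r '"\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"')
echo "$summary"   # prints "online 3/4"
```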
00:19:05.521 [2024-11-05 16:33:18.342475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.521 [2024-11-05 16:33:18.357884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:05.521 16:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.521 16:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:05.521 [2024-11-05 16:33:18.367249] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.459 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.460 "name": "raid_bdev1", 00:19:06.460 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:06.460 "strip_size_kb": 64, 00:19:06.460 "state": "online", 00:19:06.460 
"raid_level": "raid5f", 00:19:06.460 "superblock": true, 00:19:06.460 "num_base_bdevs": 4, 00:19:06.460 "num_base_bdevs_discovered": 4, 00:19:06.460 "num_base_bdevs_operational": 4, 00:19:06.460 "process": { 00:19:06.460 "type": "rebuild", 00:19:06.460 "target": "spare", 00:19:06.460 "progress": { 00:19:06.460 "blocks": 19200, 00:19:06.460 "percent": 10 00:19:06.460 } 00:19:06.460 }, 00:19:06.460 "base_bdevs_list": [ 00:19:06.460 { 00:19:06.460 "name": "spare", 00:19:06.460 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:06.460 "is_configured": true, 00:19:06.460 "data_offset": 2048, 00:19:06.460 "data_size": 63488 00:19:06.460 }, 00:19:06.460 { 00:19:06.460 "name": "BaseBdev2", 00:19:06.460 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:06.460 "is_configured": true, 00:19:06.460 "data_offset": 2048, 00:19:06.460 "data_size": 63488 00:19:06.460 }, 00:19:06.460 { 00:19:06.460 "name": "BaseBdev3", 00:19:06.460 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:06.460 "is_configured": true, 00:19:06.460 "data_offset": 2048, 00:19:06.460 "data_size": 63488 00:19:06.460 }, 00:19:06.460 { 00:19:06.460 "name": "BaseBdev4", 00:19:06.460 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:06.460 "is_configured": true, 00:19:06.460 "data_offset": 2048, 00:19:06.460 "data_size": 63488 00:19:06.460 } 00:19:06.460 ] 00:19:06.460 }' 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.460 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.460 [2024-11-05 16:33:19.518725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.721 [2024-11-05 16:33:19.575367] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.721 [2024-11-05 16:33:19.575439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.721 [2024-11-05 16:33:19.575458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.721 [2024-11-05 16:33:19.575473] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.721 "name": "raid_bdev1", 00:19:06.721 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:06.721 "strip_size_kb": 64, 00:19:06.721 "state": "online", 00:19:06.721 "raid_level": "raid5f", 00:19:06.721 "superblock": true, 00:19:06.721 "num_base_bdevs": 4, 00:19:06.721 "num_base_bdevs_discovered": 3, 00:19:06.721 "num_base_bdevs_operational": 3, 00:19:06.721 "base_bdevs_list": [ 00:19:06.721 { 00:19:06.721 "name": null, 00:19:06.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.721 "is_configured": false, 00:19:06.721 "data_offset": 0, 00:19:06.721 "data_size": 63488 00:19:06.721 }, 00:19:06.721 { 00:19:06.721 "name": "BaseBdev2", 00:19:06.721 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:06.721 "is_configured": true, 00:19:06.721 "data_offset": 2048, 00:19:06.721 "data_size": 63488 00:19:06.721 }, 00:19:06.721 { 00:19:06.721 "name": "BaseBdev3", 00:19:06.721 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:06.721 "is_configured": true, 00:19:06.721 "data_offset": 2048, 00:19:06.721 "data_size": 63488 00:19:06.721 }, 00:19:06.721 { 00:19:06.721 "name": "BaseBdev4", 00:19:06.721 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:06.721 "is_configured": true, 00:19:06.721 "data_offset": 2048, 00:19:06.721 "data_size": 63488 00:19:06.721 } 00:19:06.721 ] 00:19:06.721 
}' 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.721 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.982 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:06.982 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.982 16:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.982 [2024-11-05 16:33:19.990727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:06.982 [2024-11-05 16:33:19.990854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.982 [2024-11-05 16:33:19.990914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:06.982 [2024-11-05 16:33:19.990947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.982 [2024-11-05 16:33:19.991478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.982 [2024-11-05 16:33:19.991565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:06.982 [2024-11-05 16:33:19.991710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:06.982 [2024-11-05 16:33:19.991759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.982 [2024-11-05 16:33:19.991812] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
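[Annotation] The rebuild progress JSON captured earlier reports `"blocks": 19200, "percent": 10`. That is consistent with the array geometry from the start of this run: raid5f over 4 base bdevs gives 3 data members of 63488 blocks each, i.e. the `blockcnt 190464` logged at configure time. A hedged arithmetic check, assuming that geometry:

```shell
#!/usr/bin/env bash
# Relate the rebuild "percent" to "blocks", assuming raid5f over 4 base bdevs
# (3 data + 1 parity) with data_size 63488 per member, i.e. blockcnt 190464.
total_blocks=$((63488 * 3))                        # 190464 data blocks
rebuilt_blocks=19200                               # "blocks" from the progress JSON
percent=$((rebuilt_blocks * 100 / total_blocks))   # integer division
echo "$percent"                                    # prints 10, matching the log
```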
00:19:06.982 [2024-11-05 16:33:19.991878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.982 [2024-11-05 16:33:20.007923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:06.982 spare 00:19:06.982 16:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.982 16:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:06.982 [2024-11-05 16:33:20.019166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.949 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.949 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.949 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.949 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.949 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.950 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.950 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.950 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.950 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.209 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.209 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.209 "name": "raid_bdev1", 00:19:08.209 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:08.209 "strip_size_kb": 64, 00:19:08.209 "state": 
"online", 00:19:08.209 "raid_level": "raid5f", 00:19:08.209 "superblock": true, 00:19:08.209 "num_base_bdevs": 4, 00:19:08.209 "num_base_bdevs_discovered": 4, 00:19:08.209 "num_base_bdevs_operational": 4, 00:19:08.209 "process": { 00:19:08.209 "type": "rebuild", 00:19:08.209 "target": "spare", 00:19:08.209 "progress": { 00:19:08.209 "blocks": 19200, 00:19:08.209 "percent": 10 00:19:08.209 } 00:19:08.209 }, 00:19:08.209 "base_bdevs_list": [ 00:19:08.209 { 00:19:08.209 "name": "spare", 00:19:08.209 "uuid": "0af184a8-d264-565f-b42e-260f853c6ca1", 00:19:08.209 "is_configured": true, 00:19:08.209 "data_offset": 2048, 00:19:08.209 "data_size": 63488 00:19:08.209 }, 00:19:08.209 { 00:19:08.209 "name": "BaseBdev2", 00:19:08.209 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:08.209 "is_configured": true, 00:19:08.209 "data_offset": 2048, 00:19:08.209 "data_size": 63488 00:19:08.209 }, 00:19:08.209 { 00:19:08.209 "name": "BaseBdev3", 00:19:08.209 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:08.209 "is_configured": true, 00:19:08.209 "data_offset": 2048, 00:19:08.209 "data_size": 63488 00:19:08.209 }, 00:19:08.209 { 00:19:08.209 "name": "BaseBdev4", 00:19:08.210 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:08.210 "is_configured": true, 00:19:08.210 "data_offset": 2048, 00:19:08.210 "data_size": 63488 00:19:08.210 } 00:19:08.210 ] 00:19:08.210 }' 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:08.210 16:33:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.210 [2024-11-05 16:33:21.153839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.210 [2024-11-05 16:33:21.227166] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:08.210 [2024-11-05 16:33:21.227217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.210 [2024-11-05 16:33:21.227236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.210 [2024-11-05 16:33:21.227243] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.210 16:33:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.210 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.469 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.469 "name": "raid_bdev1", 00:19:08.469 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:08.469 "strip_size_kb": 64, 00:19:08.469 "state": "online", 00:19:08.470 "raid_level": "raid5f", 00:19:08.470 "superblock": true, 00:19:08.470 "num_base_bdevs": 4, 00:19:08.470 "num_base_bdevs_discovered": 3, 00:19:08.470 "num_base_bdevs_operational": 3, 00:19:08.470 "base_bdevs_list": [ 00:19:08.470 { 00:19:08.470 "name": null, 00:19:08.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.470 "is_configured": false, 00:19:08.470 "data_offset": 0, 00:19:08.470 "data_size": 63488 00:19:08.470 }, 00:19:08.470 { 00:19:08.470 "name": "BaseBdev2", 00:19:08.470 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:08.470 "is_configured": true, 00:19:08.470 "data_offset": 2048, 00:19:08.470 "data_size": 63488 00:19:08.470 }, 00:19:08.470 { 00:19:08.470 "name": "BaseBdev3", 00:19:08.470 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:08.470 "is_configured": true, 00:19:08.470 "data_offset": 2048, 00:19:08.470 "data_size": 63488 00:19:08.470 }, 00:19:08.470 { 00:19:08.470 "name": "BaseBdev4", 00:19:08.470 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:08.470 "is_configured": true, 00:19:08.470 "data_offset": 2048, 00:19:08.470 
"data_size": 63488 00:19:08.470 } 00:19:08.470 ] 00:19:08.470 }' 00:19:08.470 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.470 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.729 "name": "raid_bdev1", 00:19:08.729 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:08.729 "strip_size_kb": 64, 00:19:08.729 "state": "online", 00:19:08.729 "raid_level": "raid5f", 00:19:08.729 "superblock": true, 00:19:08.729 "num_base_bdevs": 4, 00:19:08.729 "num_base_bdevs_discovered": 3, 00:19:08.729 "num_base_bdevs_operational": 3, 00:19:08.729 "base_bdevs_list": [ 00:19:08.729 { 00:19:08.729 "name": null, 00:19:08.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.729 
"is_configured": false, 00:19:08.729 "data_offset": 0, 00:19:08.729 "data_size": 63488 00:19:08.729 }, 00:19:08.729 { 00:19:08.729 "name": "BaseBdev2", 00:19:08.729 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:08.729 "is_configured": true, 00:19:08.729 "data_offset": 2048, 00:19:08.729 "data_size": 63488 00:19:08.729 }, 00:19:08.729 { 00:19:08.729 "name": "BaseBdev3", 00:19:08.729 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:08.729 "is_configured": true, 00:19:08.729 "data_offset": 2048, 00:19:08.729 "data_size": 63488 00:19:08.729 }, 00:19:08.729 { 00:19:08.729 "name": "BaseBdev4", 00:19:08.729 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:08.729 "is_configured": true, 00:19:08.729 "data_offset": 2048, 00:19:08.729 "data_size": 63488 00:19:08.729 } 00:19:08.729 ] 00:19:08.729 }' 00:19:08.729 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.988 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.988 16:33:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.989 [2024-11-05 16:33:21.897972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.989 [2024-11-05 16:33:21.898071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.989 [2024-11-05 16:33:21.898101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:08.989 [2024-11-05 16:33:21.898111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.989 [2024-11-05 16:33:21.898607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.989 [2024-11-05 16:33:21.898629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.989 [2024-11-05 16:33:21.898716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:08.989 [2024-11-05 16:33:21.898732] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.989 [2024-11-05 16:33:21.898745] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.989 [2024-11-05 16:33:21.898755] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:08.989 BaseBdev1 00:19:08.989 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.989 16:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.927 "name": "raid_bdev1", 00:19:09.927 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:09.927 "strip_size_kb": 64, 00:19:09.927 "state": "online", 00:19:09.927 "raid_level": "raid5f", 00:19:09.927 "superblock": true, 00:19:09.927 "num_base_bdevs": 4, 00:19:09.927 "num_base_bdevs_discovered": 3, 00:19:09.927 "num_base_bdevs_operational": 3, 00:19:09.927 "base_bdevs_list": [ 00:19:09.927 { 00:19:09.927 "name": null, 00:19:09.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.927 "is_configured": false, 00:19:09.927 
"data_offset": 0, 00:19:09.927 "data_size": 63488 00:19:09.927 }, 00:19:09.927 { 00:19:09.927 "name": "BaseBdev2", 00:19:09.927 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:09.927 "is_configured": true, 00:19:09.927 "data_offset": 2048, 00:19:09.927 "data_size": 63488 00:19:09.927 }, 00:19:09.927 { 00:19:09.927 "name": "BaseBdev3", 00:19:09.927 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:09.927 "is_configured": true, 00:19:09.927 "data_offset": 2048, 00:19:09.927 "data_size": 63488 00:19:09.927 }, 00:19:09.927 { 00:19:09.927 "name": "BaseBdev4", 00:19:09.927 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:09.927 "is_configured": true, 00:19:09.927 "data_offset": 2048, 00:19:09.927 "data_size": 63488 00:19:09.927 } 00:19:09.927 ] 00:19:09.927 }' 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.927 16:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.497 "name": "raid_bdev1", 00:19:10.497 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:10.497 "strip_size_kb": 64, 00:19:10.497 "state": "online", 00:19:10.497 "raid_level": "raid5f", 00:19:10.497 "superblock": true, 00:19:10.497 "num_base_bdevs": 4, 00:19:10.497 "num_base_bdevs_discovered": 3, 00:19:10.497 "num_base_bdevs_operational": 3, 00:19:10.497 "base_bdevs_list": [ 00:19:10.497 { 00:19:10.497 "name": null, 00:19:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.497 "is_configured": false, 00:19:10.497 "data_offset": 0, 00:19:10.497 "data_size": 63488 00:19:10.497 }, 00:19:10.497 { 00:19:10.497 "name": "BaseBdev2", 00:19:10.497 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:10.497 "is_configured": true, 00:19:10.497 "data_offset": 2048, 00:19:10.497 "data_size": 63488 00:19:10.497 }, 00:19:10.497 { 00:19:10.497 "name": "BaseBdev3", 00:19:10.497 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:10.497 "is_configured": true, 00:19:10.497 "data_offset": 2048, 00:19:10.497 "data_size": 63488 00:19:10.497 }, 00:19:10.497 { 00:19:10.497 "name": "BaseBdev4", 00:19:10.497 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:10.497 "is_configured": true, 00:19:10.497 "data_offset": 2048, 00:19:10.497 "data_size": 63488 00:19:10.497 } 00:19:10.497 ] 00:19:10.497 }' 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.497 
16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.497 [2024-11-05 16:33:23.479654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.497 [2024-11-05 16:33:23.479930] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.497 [2024-11-05 16:33:23.479971] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:10.497 request: 00:19:10.497 { 00:19:10.497 "base_bdev": "BaseBdev1", 00:19:10.497 "raid_bdev": "raid_bdev1", 00:19:10.497 "method": "bdev_raid_add_base_bdev", 00:19:10.497 "req_id": 1 00:19:10.497 } 00:19:10.497 Got JSON-RPC error response 00:19:10.497 response: 00:19:10.497 { 00:19:10.497 "code": -22, 00:19:10.497 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:10.497 } 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.497 16:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.435 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.695 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.695 "name": "raid_bdev1", 00:19:11.695 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:11.695 "strip_size_kb": 64, 00:19:11.695 "state": "online", 00:19:11.695 "raid_level": "raid5f", 00:19:11.695 "superblock": true, 00:19:11.695 "num_base_bdevs": 4, 00:19:11.695 "num_base_bdevs_discovered": 3, 00:19:11.695 "num_base_bdevs_operational": 3, 00:19:11.695 "base_bdevs_list": [ 00:19:11.695 { 00:19:11.695 "name": null, 00:19:11.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.695 "is_configured": false, 00:19:11.695 "data_offset": 0, 00:19:11.695 "data_size": 63488 00:19:11.695 }, 00:19:11.695 { 00:19:11.695 "name": "BaseBdev2", 00:19:11.695 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:11.695 "is_configured": true, 00:19:11.695 "data_offset": 2048, 00:19:11.695 "data_size": 63488 00:19:11.695 }, 00:19:11.695 { 00:19:11.695 "name": "BaseBdev3", 00:19:11.695 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:11.695 "is_configured": true, 00:19:11.695 "data_offset": 2048, 00:19:11.695 "data_size": 63488 00:19:11.695 }, 00:19:11.695 { 00:19:11.695 "name": "BaseBdev4", 00:19:11.695 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:11.695 "is_configured": true, 00:19:11.695 "data_offset": 2048, 00:19:11.695 "data_size": 63488 00:19:11.695 } 00:19:11.695 ] 00:19:11.695 }' 00:19:11.695 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.695 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.954 16:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.955 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.955 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.955 "name": "raid_bdev1", 00:19:11.955 "uuid": "ff992a3f-69e2-4d6b-9906-e99ee1a9e9a9", 00:19:11.955 "strip_size_kb": 64, 00:19:11.955 "state": "online", 00:19:11.955 "raid_level": "raid5f", 00:19:11.955 "superblock": true, 00:19:11.955 "num_base_bdevs": 4, 00:19:11.955 "num_base_bdevs_discovered": 3, 00:19:11.955 "num_base_bdevs_operational": 3, 00:19:11.955 "base_bdevs_list": [ 00:19:11.955 { 00:19:11.955 "name": null, 00:19:11.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.955 "is_configured": false, 00:19:11.955 "data_offset": 0, 00:19:11.955 "data_size": 63488 00:19:11.955 }, 00:19:11.955 { 00:19:11.955 "name": "BaseBdev2", 00:19:11.955 "uuid": "0e48e87d-001c-5542-8ed9-98b5a1c6718c", 00:19:11.955 "is_configured": true, 
00:19:11.955 "data_offset": 2048, 00:19:11.955 "data_size": 63488 00:19:11.955 }, 00:19:11.955 { 00:19:11.955 "name": "BaseBdev3", 00:19:11.955 "uuid": "12362662-771b-575b-84e2-651e9f2a344f", 00:19:11.955 "is_configured": true, 00:19:11.955 "data_offset": 2048, 00:19:11.955 "data_size": 63488 00:19:11.955 }, 00:19:11.955 { 00:19:11.955 "name": "BaseBdev4", 00:19:11.955 "uuid": "9e426153-fbac-5187-8e00-6798c04ca4d8", 00:19:11.955 "is_configured": true, 00:19:11.955 "data_offset": 2048, 00:19:11.955 "data_size": 63488 00:19:11.955 } 00:19:11.955 ] 00:19:11.955 }' 00:19:11.955 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85480 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85480 ']' 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85480 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85480 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 
-- # echo 'killing process with pid 85480' 00:19:12.214 killing process with pid 85480 00:19:12.214 Received shutdown signal, test time was about 60.000000 seconds 00:19:12.214 00:19:12.214 Latency(us) 00:19:12.214 [2024-11-05T16:33:25.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.214 [2024-11-05T16:33:25.302Z] =================================================================================================================== 00:19:12.214 [2024-11-05T16:33:25.302Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85480 00:19:12.214 [2024-11-05 16:33:25.144086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.214 [2024-11-05 16:33:25.144255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.214 16:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85480 00:19:12.214 [2024-11-05 16:33:25.144344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.215 [2024-11-05 16:33:25.144358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:12.784 [2024-11-05 16:33:25.644736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.725 ************************************ 00:19:13.725 END TEST raid5f_rebuild_test_sb 00:19:13.725 ************************************ 00:19:13.725 16:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:13.725 00:19:13.725 real 0m27.221s 00:19:13.725 user 0m34.316s 00:19:13.725 sys 0m3.010s 00:19:13.725 16:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:13.725 16:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.725 16:33:26 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:13.725 16:33:26 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:13.725 16:33:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:13.725 16:33:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:13.725 16:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.987 ************************************ 00:19:13.987 START TEST raid_state_function_test_sb_4k 00:19:13.987 ************************************ 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.987 16:33:26 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86291 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86291' 00:19:13.987 Process raid pid: 86291 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86291 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86291 ']' 00:19:13.987 16:33:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.987 16:33:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.987 [2024-11-05 16:33:26.924497] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:19:13.987 [2024-11-05 16:33:26.924725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.247 [2024-11-05 16:33:27.099966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.247 [2024-11-05 16:33:27.218988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.507 [2024-11-05 16:33:27.422461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.507 [2024-11-05 16:33:27.422618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.767 [2024-11-05 16:33:27.765760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.767 [2024-11-05 16:33:27.765858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.767 [2024-11-05 16:33:27.765888] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.767 [2024-11-05 16:33:27.765928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.767 
16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.767 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.768 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.768 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.768 "name": "Existed_Raid", 00:19:14.768 "uuid": "7ef1a657-8367-43ff-8290-626a679af660", 00:19:14.768 "strip_size_kb": 0, 00:19:14.768 "state": "configuring", 00:19:14.768 "raid_level": "raid1", 00:19:14.768 "superblock": true, 00:19:14.768 "num_base_bdevs": 2, 00:19:14.768 "num_base_bdevs_discovered": 0, 00:19:14.768 "num_base_bdevs_operational": 2, 00:19:14.768 "base_bdevs_list": [ 00:19:14.768 { 00:19:14.768 "name": "BaseBdev1", 00:19:14.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.768 "is_configured": false, 00:19:14.768 "data_offset": 0, 00:19:14.768 "data_size": 0 00:19:14.768 }, 00:19:14.768 { 00:19:14.768 "name": "BaseBdev2", 00:19:14.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.768 "is_configured": false, 00:19:14.768 "data_offset": 0, 00:19:14.768 "data_size": 0 00:19:14.768 } 00:19:14.768 ] 00:19:14.768 }' 00:19:14.768 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.768 16:33:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.336 [2024-11-05 16:33:28.212951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.336 [2024-11-05 16:33:28.212985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.336 [2024-11-05 16:33:28.220923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:15.336 [2024-11-05 16:33:28.220963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:15.336 [2024-11-05 16:33:28.220972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.336 [2024-11-05 16:33:28.220984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:15.336 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.336 16:33:28 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.337 [2024-11-05 16:33:28.268337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.337 BaseBdev1 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.337 [ 00:19:15.337 { 00:19:15.337 "name": "BaseBdev1", 00:19:15.337 "aliases": [ 00:19:15.337 
"5158d83e-ccf7-4f51-8b01-54715eaeb63e" 00:19:15.337 ], 00:19:15.337 "product_name": "Malloc disk", 00:19:15.337 "block_size": 4096, 00:19:15.337 "num_blocks": 8192, 00:19:15.337 "uuid": "5158d83e-ccf7-4f51-8b01-54715eaeb63e", 00:19:15.337 "assigned_rate_limits": { 00:19:15.337 "rw_ios_per_sec": 0, 00:19:15.337 "rw_mbytes_per_sec": 0, 00:19:15.337 "r_mbytes_per_sec": 0, 00:19:15.337 "w_mbytes_per_sec": 0 00:19:15.337 }, 00:19:15.337 "claimed": true, 00:19:15.337 "claim_type": "exclusive_write", 00:19:15.337 "zoned": false, 00:19:15.337 "supported_io_types": { 00:19:15.337 "read": true, 00:19:15.337 "write": true, 00:19:15.337 "unmap": true, 00:19:15.337 "flush": true, 00:19:15.337 "reset": true, 00:19:15.337 "nvme_admin": false, 00:19:15.337 "nvme_io": false, 00:19:15.337 "nvme_io_md": false, 00:19:15.337 "write_zeroes": true, 00:19:15.337 "zcopy": true, 00:19:15.337 "get_zone_info": false, 00:19:15.337 "zone_management": false, 00:19:15.337 "zone_append": false, 00:19:15.337 "compare": false, 00:19:15.337 "compare_and_write": false, 00:19:15.337 "abort": true, 00:19:15.337 "seek_hole": false, 00:19:15.337 "seek_data": false, 00:19:15.337 "copy": true, 00:19:15.337 "nvme_iov_md": false 00:19:15.337 }, 00:19:15.337 "memory_domains": [ 00:19:15.337 { 00:19:15.337 "dma_device_id": "system", 00:19:15.337 "dma_device_type": 1 00:19:15.337 }, 00:19:15.337 { 00:19:15.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.337 "dma_device_type": 2 00:19:15.337 } 00:19:15.337 ], 00:19:15.337 "driver_specific": {} 00:19:15.337 } 00:19:15.337 ] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.337 "name": "Existed_Raid", 00:19:15.337 "uuid": "5ea696eb-cae5-46b1-a127-5f9f13f2ddb2", 00:19:15.337 "strip_size_kb": 0, 00:19:15.337 "state": "configuring", 00:19:15.337 "raid_level": "raid1", 00:19:15.337 "superblock": true, 00:19:15.337 "num_base_bdevs": 2, 00:19:15.337 
"num_base_bdevs_discovered": 1, 00:19:15.337 "num_base_bdevs_operational": 2, 00:19:15.337 "base_bdevs_list": [ 00:19:15.337 { 00:19:15.337 "name": "BaseBdev1", 00:19:15.337 "uuid": "5158d83e-ccf7-4f51-8b01-54715eaeb63e", 00:19:15.337 "is_configured": true, 00:19:15.337 "data_offset": 256, 00:19:15.337 "data_size": 7936 00:19:15.337 }, 00:19:15.337 { 00:19:15.337 "name": "BaseBdev2", 00:19:15.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.337 "is_configured": false, 00:19:15.337 "data_offset": 0, 00:19:15.337 "data_size": 0 00:19:15.337 } 00:19:15.337 ] 00:19:15.337 }' 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.337 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.907 [2024-11-05 16:33:28.767605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.907 [2024-11-05 16:33:28.767702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.907 [2024-11-05 16:33:28.779642] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.907 [2024-11-05 16:33:28.781497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.907 [2024-11-05 16:33:28.781584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.907 "name": "Existed_Raid", 00:19:15.907 "uuid": "dd30121d-3f8e-4fe1-86b8-c4ec2003f974", 00:19:15.907 "strip_size_kb": 0, 00:19:15.907 "state": "configuring", 00:19:15.907 "raid_level": "raid1", 00:19:15.907 "superblock": true, 00:19:15.907 "num_base_bdevs": 2, 00:19:15.907 "num_base_bdevs_discovered": 1, 00:19:15.907 "num_base_bdevs_operational": 2, 00:19:15.907 "base_bdevs_list": [ 00:19:15.907 { 00:19:15.907 "name": "BaseBdev1", 00:19:15.907 "uuid": "5158d83e-ccf7-4f51-8b01-54715eaeb63e", 00:19:15.907 "is_configured": true, 00:19:15.907 "data_offset": 256, 00:19:15.907 "data_size": 7936 00:19:15.907 }, 00:19:15.907 { 00:19:15.907 "name": "BaseBdev2", 00:19:15.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.907 "is_configured": false, 00:19:15.907 "data_offset": 0, 00:19:15.907 "data_size": 0 00:19:15.907 } 00:19:15.907 ] 00:19:15.907 }' 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.907 16:33:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.166 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:16.166 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.166 16:33:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.166 [2024-11-05 16:33:29.221706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.166 [2024-11-05 16:33:29.222062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:16.166 [2024-11-05 16:33:29.222081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:16.166 [2024-11-05 16:33:29.222344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:16.166 [2024-11-05 16:33:29.222489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:16.166 [2024-11-05 16:33:29.222502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:16.167 [2024-11-05 16:33:29.222663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.167 BaseBdev2 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:16.167 16:33:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.167 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.427 [ 00:19:16.427 { 00:19:16.427 "name": "BaseBdev2", 00:19:16.427 "aliases": [ 00:19:16.427 "b0fbd955-83b9-42ee-839f-2ca12727dff8" 00:19:16.427 ], 00:19:16.427 "product_name": "Malloc disk", 00:19:16.427 "block_size": 4096, 00:19:16.427 "num_blocks": 8192, 00:19:16.427 "uuid": "b0fbd955-83b9-42ee-839f-2ca12727dff8", 00:19:16.427 "assigned_rate_limits": { 00:19:16.427 "rw_ios_per_sec": 0, 00:19:16.427 "rw_mbytes_per_sec": 0, 00:19:16.427 "r_mbytes_per_sec": 0, 00:19:16.427 "w_mbytes_per_sec": 0 00:19:16.427 }, 00:19:16.427 "claimed": true, 00:19:16.427 "claim_type": "exclusive_write", 00:19:16.427 "zoned": false, 00:19:16.427 "supported_io_types": { 00:19:16.427 "read": true, 00:19:16.427 "write": true, 00:19:16.427 "unmap": true, 00:19:16.427 "flush": true, 00:19:16.427 "reset": true, 00:19:16.427 "nvme_admin": false, 00:19:16.427 "nvme_io": false, 00:19:16.427 "nvme_io_md": false, 00:19:16.427 "write_zeroes": true, 00:19:16.427 "zcopy": true, 00:19:16.427 "get_zone_info": false, 00:19:16.427 "zone_management": false, 00:19:16.427 "zone_append": false, 00:19:16.427 "compare": false, 00:19:16.427 "compare_and_write": false, 00:19:16.427 "abort": true, 00:19:16.427 "seek_hole": false, 00:19:16.427 "seek_data": false, 00:19:16.427 "copy": true, 00:19:16.427 "nvme_iov_md": false 
00:19:16.427 }, 00:19:16.427 "memory_domains": [ 00:19:16.427 { 00:19:16.427 "dma_device_id": "system", 00:19:16.427 "dma_device_type": 1 00:19:16.427 }, 00:19:16.427 { 00:19:16.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.427 "dma_device_type": 2 00:19:16.427 } 00:19:16.427 ], 00:19:16.427 "driver_specific": {} 00:19:16.427 } 00:19:16.427 ] 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.427 "name": "Existed_Raid", 00:19:16.427 "uuid": "dd30121d-3f8e-4fe1-86b8-c4ec2003f974", 00:19:16.427 "strip_size_kb": 0, 00:19:16.427 "state": "online", 00:19:16.427 "raid_level": "raid1", 00:19:16.427 "superblock": true, 00:19:16.427 "num_base_bdevs": 2, 00:19:16.427 "num_base_bdevs_discovered": 2, 00:19:16.427 "num_base_bdevs_operational": 2, 00:19:16.427 "base_bdevs_list": [ 00:19:16.427 { 00:19:16.427 "name": "BaseBdev1", 00:19:16.427 "uuid": "5158d83e-ccf7-4f51-8b01-54715eaeb63e", 00:19:16.427 "is_configured": true, 00:19:16.427 "data_offset": 256, 00:19:16.427 "data_size": 7936 00:19:16.427 }, 00:19:16.427 { 00:19:16.427 "name": "BaseBdev2", 00:19:16.427 "uuid": "b0fbd955-83b9-42ee-839f-2ca12727dff8", 00:19:16.427 "is_configured": true, 00:19:16.427 "data_offset": 256, 00:19:16.427 "data_size": 7936 00:19:16.427 } 00:19:16.427 ] 00:19:16.427 }' 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.427 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:16.687 16:33:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.687 [2024-11-05 16:33:29.733260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.687 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.687 "name": "Existed_Raid", 00:19:16.687 "aliases": [ 00:19:16.687 "dd30121d-3f8e-4fe1-86b8-c4ec2003f974" 00:19:16.687 ], 00:19:16.687 "product_name": "Raid Volume", 00:19:16.687 "block_size": 4096, 00:19:16.687 "num_blocks": 7936, 00:19:16.687 "uuid": "dd30121d-3f8e-4fe1-86b8-c4ec2003f974", 00:19:16.687 "assigned_rate_limits": { 00:19:16.687 "rw_ios_per_sec": 0, 00:19:16.687 "rw_mbytes_per_sec": 0, 00:19:16.687 "r_mbytes_per_sec": 0, 00:19:16.687 "w_mbytes_per_sec": 0 00:19:16.687 }, 00:19:16.687 "claimed": false, 00:19:16.687 "zoned": false, 00:19:16.687 "supported_io_types": { 00:19:16.687 "read": true, 
00:19:16.687 "write": true, 00:19:16.687 "unmap": false, 00:19:16.687 "flush": false, 00:19:16.687 "reset": true, 00:19:16.687 "nvme_admin": false, 00:19:16.687 "nvme_io": false, 00:19:16.687 "nvme_io_md": false, 00:19:16.687 "write_zeroes": true, 00:19:16.687 "zcopy": false, 00:19:16.687 "get_zone_info": false, 00:19:16.687 "zone_management": false, 00:19:16.687 "zone_append": false, 00:19:16.687 "compare": false, 00:19:16.687 "compare_and_write": false, 00:19:16.687 "abort": false, 00:19:16.687 "seek_hole": false, 00:19:16.687 "seek_data": false, 00:19:16.687 "copy": false, 00:19:16.687 "nvme_iov_md": false 00:19:16.687 }, 00:19:16.687 "memory_domains": [ 00:19:16.687 { 00:19:16.687 "dma_device_id": "system", 00:19:16.687 "dma_device_type": 1 00:19:16.687 }, 00:19:16.688 { 00:19:16.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.688 "dma_device_type": 2 00:19:16.688 }, 00:19:16.688 { 00:19:16.688 "dma_device_id": "system", 00:19:16.688 "dma_device_type": 1 00:19:16.688 }, 00:19:16.688 { 00:19:16.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.688 "dma_device_type": 2 00:19:16.688 } 00:19:16.688 ], 00:19:16.688 "driver_specific": { 00:19:16.688 "raid": { 00:19:16.688 "uuid": "dd30121d-3f8e-4fe1-86b8-c4ec2003f974", 00:19:16.688 "strip_size_kb": 0, 00:19:16.688 "state": "online", 00:19:16.688 "raid_level": "raid1", 00:19:16.688 "superblock": true, 00:19:16.688 "num_base_bdevs": 2, 00:19:16.688 "num_base_bdevs_discovered": 2, 00:19:16.688 "num_base_bdevs_operational": 2, 00:19:16.688 "base_bdevs_list": [ 00:19:16.688 { 00:19:16.688 "name": "BaseBdev1", 00:19:16.688 "uuid": "5158d83e-ccf7-4f51-8b01-54715eaeb63e", 00:19:16.688 "is_configured": true, 00:19:16.688 "data_offset": 256, 00:19:16.688 "data_size": 7936 00:19:16.688 }, 00:19:16.688 { 00:19:16.688 "name": "BaseBdev2", 00:19:16.688 "uuid": "b0fbd955-83b9-42ee-839f-2ca12727dff8", 00:19:16.688 "is_configured": true, 00:19:16.688 "data_offset": 256, 00:19:16.688 "data_size": 7936 00:19:16.688 } 
00:19:16.688 ] 00:19:16.688 } 00:19:16.688 } 00:19:16.688 }' 00:19:16.688 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.947 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:16.948 BaseBdev2' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.948 16:33:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.948 [2024-11-05 16:33:29.948624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.207 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.207 "name": "Existed_Raid", 00:19:17.207 "uuid": "dd30121d-3f8e-4fe1-86b8-c4ec2003f974", 00:19:17.207 "strip_size_kb": 0, 00:19:17.207 "state": "online", 00:19:17.207 "raid_level": "raid1", 00:19:17.207 "superblock": true, 00:19:17.207 "num_base_bdevs": 2, 00:19:17.207 
"num_base_bdevs_discovered": 1, 00:19:17.207 "num_base_bdevs_operational": 1, 00:19:17.207 "base_bdevs_list": [ 00:19:17.207 { 00:19:17.207 "name": null, 00:19:17.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.207 "is_configured": false, 00:19:17.207 "data_offset": 0, 00:19:17.207 "data_size": 7936 00:19:17.207 }, 00:19:17.207 { 00:19:17.208 "name": "BaseBdev2", 00:19:17.208 "uuid": "b0fbd955-83b9-42ee-839f-2ca12727dff8", 00:19:17.208 "is_configured": true, 00:19:17.208 "data_offset": 256, 00:19:17.208 "data_size": 7936 00:19:17.208 } 00:19:17.208 ] 00:19:17.208 }' 00:19:17.208 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.208 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:17.467 16:33:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.467 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.467 [2024-11-05 16:33:30.545531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:17.467 [2024-11-05 16:33:30.545663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.728 [2024-11-05 16:33:30.640207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.728 [2024-11-05 16:33:30.640265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.728 [2024-11-05 16:33:30.640277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86291 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86291 ']' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86291 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86291 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:17.728 killing process with pid 86291 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86291' 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86291 00:19:17.728 [2024-11-05 16:33:30.736263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.728 16:33:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86291 00:19:17.728 [2024-11-05 16:33:30.754145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.109 16:33:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:19.109 00:19:19.109 real 0m5.047s 00:19:19.109 user 0m7.272s 00:19:19.109 sys 0m0.864s 00:19:19.109 ************************************ 00:19:19.109 END TEST raid_state_function_test_sb_4k 00:19:19.109 
************************************ 00:19:19.109 16:33:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:19.109 16:33:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 16:33:31 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:19.109 16:33:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:19.109 16:33:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:19.109 16:33:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 ************************************ 00:19:19.109 START TEST raid_superblock_test_4k 00:19:19.109 ************************************ 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86538 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86538 00:19:19.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86538 ']' 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.109 16:33:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.109 [2024-11-05 16:33:32.033362] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:19:19.109 [2024-11-05 16:33:32.033589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86538 ] 00:19:19.370 [2024-11-05 16:33:32.203642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.370 [2024-11-05 16:33:32.318889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.628 [2024-11-05 16:33:32.523900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.628 [2024-11-05 16:33:32.523995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.889 malloc1 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.889 [2024-11-05 16:33:32.930459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.889 [2024-11-05 16:33:32.930536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.889 [2024-11-05 16:33:32.930565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:19.889 [2024-11-05 16:33:32.930574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.889 [2024-11-05 16:33:32.932725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.889 [2024-11-05 16:33:32.932760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.889 pt1 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.889 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.169 malloc2 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.169 [2024-11-05 16:33:32.989153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.169 [2024-11-05 16:33:32.989263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.169 [2024-11-05 16:33:32.989305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:20.169 [2024-11-05 16:33:32.989336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.169 [2024-11-05 16:33:32.991483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.169 [2024-11-05 
16:33:32.991565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.169 pt2 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.169 16:33:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.169 [2024-11-05 16:33:33.001187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.169 [2024-11-05 16:33:33.003029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.169 [2024-11-05 16:33:33.003257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:20.169 [2024-11-05 16:33:33.003308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:20.169 [2024-11-05 16:33:33.003588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:20.169 [2024-11-05 16:33:33.003780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:20.169 [2024-11-05 16:33:33.003827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:20.169 [2024-11-05 16:33:33.004028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.169 "name": "raid_bdev1", 00:19:20.169 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:20.169 "strip_size_kb": 0, 00:19:20.169 "state": "online", 00:19:20.169 "raid_level": "raid1", 00:19:20.169 "superblock": true, 00:19:20.169 "num_base_bdevs": 2, 00:19:20.169 
"num_base_bdevs_discovered": 2, 00:19:20.169 "num_base_bdevs_operational": 2, 00:19:20.169 "base_bdevs_list": [ 00:19:20.169 { 00:19:20.169 "name": "pt1", 00:19:20.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.169 "is_configured": true, 00:19:20.169 "data_offset": 256, 00:19:20.169 "data_size": 7936 00:19:20.169 }, 00:19:20.169 { 00:19:20.169 "name": "pt2", 00:19:20.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.169 "is_configured": true, 00:19:20.169 "data_offset": 256, 00:19:20.169 "data_size": 7936 00:19:20.169 } 00:19:20.169 ] 00:19:20.169 }' 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.169 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.442 [2024-11-05 16:33:33.476672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.442 "name": "raid_bdev1", 00:19:20.442 "aliases": [ 00:19:20.442 "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6" 00:19:20.442 ], 00:19:20.442 "product_name": "Raid Volume", 00:19:20.442 "block_size": 4096, 00:19:20.442 "num_blocks": 7936, 00:19:20.442 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:20.442 "assigned_rate_limits": { 00:19:20.442 "rw_ios_per_sec": 0, 00:19:20.442 "rw_mbytes_per_sec": 0, 00:19:20.442 "r_mbytes_per_sec": 0, 00:19:20.442 "w_mbytes_per_sec": 0 00:19:20.442 }, 00:19:20.442 "claimed": false, 00:19:20.442 "zoned": false, 00:19:20.442 "supported_io_types": { 00:19:20.442 "read": true, 00:19:20.442 "write": true, 00:19:20.442 "unmap": false, 00:19:20.442 "flush": false, 00:19:20.442 "reset": true, 00:19:20.442 "nvme_admin": false, 00:19:20.442 "nvme_io": false, 00:19:20.442 "nvme_io_md": false, 00:19:20.442 "write_zeroes": true, 00:19:20.442 "zcopy": false, 00:19:20.442 "get_zone_info": false, 00:19:20.442 "zone_management": false, 00:19:20.442 "zone_append": false, 00:19:20.442 "compare": false, 00:19:20.442 "compare_and_write": false, 00:19:20.442 "abort": false, 00:19:20.442 "seek_hole": false, 00:19:20.442 "seek_data": false, 00:19:20.442 "copy": false, 00:19:20.442 "nvme_iov_md": false 00:19:20.442 }, 00:19:20.442 "memory_domains": [ 00:19:20.442 { 00:19:20.442 "dma_device_id": "system", 00:19:20.442 "dma_device_type": 1 00:19:20.442 }, 00:19:20.442 { 00:19:20.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.442 "dma_device_type": 2 00:19:20.442 }, 00:19:20.442 { 00:19:20.442 "dma_device_id": "system", 00:19:20.442 "dma_device_type": 1 00:19:20.442 }, 00:19:20.442 { 00:19:20.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.442 "dma_device_type": 2 00:19:20.442 } 00:19:20.442 ], 
00:19:20.442 "driver_specific": { 00:19:20.442 "raid": { 00:19:20.442 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:20.442 "strip_size_kb": 0, 00:19:20.442 "state": "online", 00:19:20.442 "raid_level": "raid1", 00:19:20.442 "superblock": true, 00:19:20.442 "num_base_bdevs": 2, 00:19:20.442 "num_base_bdevs_discovered": 2, 00:19:20.442 "num_base_bdevs_operational": 2, 00:19:20.442 "base_bdevs_list": [ 00:19:20.442 { 00:19:20.442 "name": "pt1", 00:19:20.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.442 "is_configured": true, 00:19:20.442 "data_offset": 256, 00:19:20.442 "data_size": 7936 00:19:20.442 }, 00:19:20.442 { 00:19:20.442 "name": "pt2", 00:19:20.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.442 "is_configured": true, 00:19:20.442 "data_offset": 256, 00:19:20.442 "data_size": 7936 00:19:20.442 } 00:19:20.442 ] 00:19:20.442 } 00:19:20.442 } 00:19:20.442 }' 00:19:20.442 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.702 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:20.702 pt2' 00:19:20.702 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.702 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.703 [2024-11-05 16:33:33.712365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 ']' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.703 [2024-11-05 16:33:33.755942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.703 [2024-11-05 16:33:33.756015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.703 [2024-11-05 16:33:33.756178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.703 [2024-11-05 16:33:33.756277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.703 [2024-11-05 16:33:33.756332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.703 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.963 [2024-11-05 16:33:33.875753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:20.963 [2024-11-05 16:33:33.877861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:20.963 [2024-11-05 16:33:33.877941] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:20.963 [2024-11-05 16:33:33.878003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:20.963 [2024-11-05 16:33:33.878020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.963 [2024-11-05 16:33:33.878031] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:20.963 request: 00:19:20.963 { 00:19:20.963 "name": "raid_bdev1", 00:19:20.963 "raid_level": "raid1", 00:19:20.963 "base_bdevs": [ 00:19:20.963 "malloc1", 00:19:20.963 "malloc2" 00:19:20.963 ], 00:19:20.963 "superblock": false, 00:19:20.963 "method": "bdev_raid_create", 00:19:20.963 "req_id": 1 00:19:20.963 } 00:19:20.963 Got JSON-RPC error response 00:19:20.963 response: 00:19:20.963 { 00:19:20.963 "code": -17, 00:19:20.963 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:20.963 } 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:20.963 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.964 [2024-11-05 16:33:33.939614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.964 [2024-11-05 16:33:33.939720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.964 [2024-11-05 16:33:33.939757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:20.964 [2024-11-05 16:33:33.939788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.964 [2024-11-05 16:33:33.942127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.964 [2024-11-05 16:33:33.942216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.964 [2024-11-05 16:33:33.942362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:20.964 [2024-11-05 16:33:33.942488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.964 pt1 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.964 16:33:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.964 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.964 "name": "raid_bdev1", 00:19:20.964 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:20.964 "strip_size_kb": 0, 00:19:20.964 "state": "configuring", 00:19:20.964 "raid_level": "raid1", 00:19:20.964 "superblock": true, 00:19:20.964 "num_base_bdevs": 2, 00:19:20.964 "num_base_bdevs_discovered": 1, 00:19:20.964 "num_base_bdevs_operational": 2, 00:19:20.964 "base_bdevs_list": [ 00:19:20.964 { 00:19:20.964 "name": "pt1", 00:19:20.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.964 "is_configured": true, 00:19:20.964 "data_offset": 256, 00:19:20.964 "data_size": 7936 00:19:20.964 }, 00:19:20.964 { 00:19:20.964 "name": null, 00:19:20.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.964 "is_configured": false, 00:19:20.964 "data_offset": 256, 00:19:20.964 "data_size": 7936 00:19:20.964 } 
00:19:20.964 ] 00:19:20.964 }' 00:19:20.964 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.964 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.534 [2024-11-05 16:33:34.434767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.534 [2024-11-05 16:33:34.434899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.534 [2024-11-05 16:33:34.434931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:21.534 [2024-11-05 16:33:34.434942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.534 [2024-11-05 16:33:34.435482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.534 [2024-11-05 16:33:34.435507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.534 [2024-11-05 16:33:34.435623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.534 [2024-11-05 16:33:34.435654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.534 [2024-11-05 16:33:34.435793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:21.534 [2024-11-05 16:33:34.435806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:21.534 [2024-11-05 16:33:34.436070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:21.534 [2024-11-05 16:33:34.436255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:21.534 [2024-11-05 16:33:34.436268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:21.534 [2024-11-05 16:33:34.436423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.534 pt2 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.534 "name": "raid_bdev1", 00:19:21.534 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:21.534 "strip_size_kb": 0, 00:19:21.534 "state": "online", 00:19:21.534 "raid_level": "raid1", 00:19:21.534 "superblock": true, 00:19:21.534 "num_base_bdevs": 2, 00:19:21.534 "num_base_bdevs_discovered": 2, 00:19:21.534 "num_base_bdevs_operational": 2, 00:19:21.534 "base_bdevs_list": [ 00:19:21.534 { 00:19:21.534 "name": "pt1", 00:19:21.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.534 "is_configured": true, 00:19:21.534 "data_offset": 256, 00:19:21.534 "data_size": 7936 00:19:21.534 }, 00:19:21.534 { 00:19:21.534 "name": "pt2", 00:19:21.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.534 "is_configured": true, 00:19:21.534 "data_offset": 256, 00:19:21.534 "data_size": 7936 00:19:21.534 } 00:19:21.534 ] 00:19:21.534 }' 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.534 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.794 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.794 [2024-11-05 16:33:34.874316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.054 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.054 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:22.054 "name": "raid_bdev1", 00:19:22.054 "aliases": [ 00:19:22.054 "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6" 00:19:22.054 ], 00:19:22.054 "product_name": "Raid Volume", 00:19:22.054 "block_size": 4096, 00:19:22.054 "num_blocks": 7936, 00:19:22.054 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:22.054 "assigned_rate_limits": { 00:19:22.054 "rw_ios_per_sec": 0, 00:19:22.054 "rw_mbytes_per_sec": 0, 00:19:22.054 "r_mbytes_per_sec": 0, 00:19:22.054 "w_mbytes_per_sec": 0 00:19:22.054 }, 00:19:22.054 "claimed": false, 00:19:22.054 "zoned": false, 00:19:22.054 "supported_io_types": { 00:19:22.054 "read": true, 00:19:22.054 "write": true, 00:19:22.054 "unmap": false, 
00:19:22.054 "flush": false, 00:19:22.054 "reset": true, 00:19:22.054 "nvme_admin": false, 00:19:22.054 "nvme_io": false, 00:19:22.054 "nvme_io_md": false, 00:19:22.054 "write_zeroes": true, 00:19:22.054 "zcopy": false, 00:19:22.054 "get_zone_info": false, 00:19:22.054 "zone_management": false, 00:19:22.054 "zone_append": false, 00:19:22.054 "compare": false, 00:19:22.054 "compare_and_write": false, 00:19:22.054 "abort": false, 00:19:22.054 "seek_hole": false, 00:19:22.054 "seek_data": false, 00:19:22.054 "copy": false, 00:19:22.054 "nvme_iov_md": false 00:19:22.054 }, 00:19:22.054 "memory_domains": [ 00:19:22.054 { 00:19:22.054 "dma_device_id": "system", 00:19:22.054 "dma_device_type": 1 00:19:22.054 }, 00:19:22.054 { 00:19:22.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.054 "dma_device_type": 2 00:19:22.054 }, 00:19:22.054 { 00:19:22.054 "dma_device_id": "system", 00:19:22.054 "dma_device_type": 1 00:19:22.054 }, 00:19:22.054 { 00:19:22.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.054 "dma_device_type": 2 00:19:22.054 } 00:19:22.054 ], 00:19:22.054 "driver_specific": { 00:19:22.054 "raid": { 00:19:22.054 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:22.054 "strip_size_kb": 0, 00:19:22.054 "state": "online", 00:19:22.054 "raid_level": "raid1", 00:19:22.054 "superblock": true, 00:19:22.054 "num_base_bdevs": 2, 00:19:22.054 "num_base_bdevs_discovered": 2, 00:19:22.054 "num_base_bdevs_operational": 2, 00:19:22.054 "base_bdevs_list": [ 00:19:22.054 { 00:19:22.054 "name": "pt1", 00:19:22.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.054 "is_configured": true, 00:19:22.054 "data_offset": 256, 00:19:22.054 "data_size": 7936 00:19:22.055 }, 00:19:22.055 { 00:19:22.055 "name": "pt2", 00:19:22.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.055 "is_configured": true, 00:19:22.055 "data_offset": 256, 00:19:22.055 "data_size": 7936 00:19:22.055 } 00:19:22.055 ] 00:19:22.055 } 00:19:22.055 } 00:19:22.055 }' 00:19:22.055 
16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:22.055 pt2' 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.055 16:33:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.055 [2024-11-05 16:33:35.097825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 '!=' 1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 ']' 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.055 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.315 [2024-11-05 16:33:35.145577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.315 "name": "raid_bdev1", 00:19:22.315 "uuid": 
"1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:22.315 "strip_size_kb": 0, 00:19:22.315 "state": "online", 00:19:22.315 "raid_level": "raid1", 00:19:22.315 "superblock": true, 00:19:22.315 "num_base_bdevs": 2, 00:19:22.315 "num_base_bdevs_discovered": 1, 00:19:22.315 "num_base_bdevs_operational": 1, 00:19:22.315 "base_bdevs_list": [ 00:19:22.315 { 00:19:22.315 "name": null, 00:19:22.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.315 "is_configured": false, 00:19:22.315 "data_offset": 0, 00:19:22.315 "data_size": 7936 00:19:22.315 }, 00:19:22.315 { 00:19:22.315 "name": "pt2", 00:19:22.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.315 "is_configured": true, 00:19:22.315 "data_offset": 256, 00:19:22.315 "data_size": 7936 00:19:22.315 } 00:19:22.315 ] 00:19:22.315 }' 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.315 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.575 [2024-11-05 16:33:35.564806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.575 [2024-11-05 16:33:35.564893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.575 [2024-11-05 16:33:35.564983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.575 [2024-11-05 16:33:35.565030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.575 [2024-11-05 16:33:35.565042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.575 [2024-11-05 16:33:35.636666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:22.575 [2024-11-05 16:33:35.636731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.575 [2024-11-05 16:33:35.636752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:22.575 [2024-11-05 16:33:35.636763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.575 [2024-11-05 16:33:35.639114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.575 [2024-11-05 16:33:35.639219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:22.575 [2024-11-05 16:33:35.639322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:22.575 [2024-11-05 16:33:35.639377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.575 [2024-11-05 16:33:35.639508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:22.575 [2024-11-05 16:33:35.639523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:22.575 [2024-11-05 16:33:35.639807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:22.575 [2024-11-05 16:33:35.639976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:22.575 [2024-11-05 16:33:35.639986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:19:22.575 [2024-11-05 16:33:35.640115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.575 pt2 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.575 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.835 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.835 16:33:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.835 "name": "raid_bdev1", 00:19:22.835 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:22.835 "strip_size_kb": 0, 00:19:22.835 "state": "online", 00:19:22.835 "raid_level": "raid1", 00:19:22.835 "superblock": true, 00:19:22.835 "num_base_bdevs": 2, 00:19:22.835 "num_base_bdevs_discovered": 1, 00:19:22.835 "num_base_bdevs_operational": 1, 00:19:22.835 "base_bdevs_list": [ 00:19:22.835 { 00:19:22.835 "name": null, 00:19:22.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.835 "is_configured": false, 00:19:22.835 "data_offset": 256, 00:19:22.835 "data_size": 7936 00:19:22.835 }, 00:19:22.835 { 00:19:22.835 "name": "pt2", 00:19:22.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.835 "is_configured": true, 00:19:22.835 "data_offset": 256, 00:19:22.835 "data_size": 7936 00:19:22.835 } 00:19:22.835 ] 00:19:22.835 }' 00:19:22.835 16:33:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.835 16:33:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.099 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:23.099 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.099 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.099 [2024-11-05 16:33:36.123881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.099 [2024-11-05 16:33:36.123971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.099 [2024-11-05 16:33:36.124073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.099 [2024-11-05 16:33:36.124161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:23.099 [2024-11-05 16:33:36.124212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.100 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.100 [2024-11-05 16:33:36.183795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:23.100 [2024-11-05 16:33:36.183899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.100 [2024-11-05 16:33:36.183937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:23.100 [2024-11-05 16:33:36.183964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.100 [2024-11-05 16:33:36.186345] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.100 [2024-11-05 16:33:36.186431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:23.100 [2024-11-05 16:33:36.186567] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:23.100 [2024-11-05 16:33:36.186656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:23.100 [2024-11-05 16:33:36.186851] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:23.100 [2024-11-05 16:33:36.186910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.100 [2024-11-05 16:33:36.186984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:23.100 [2024-11-05 16:33:36.187118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.100 [2024-11-05 16:33:36.187235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:23.100 [2024-11-05 16:33:36.187271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:23.100 [2024-11-05 16:33:36.187558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:23.100 [2024-11-05 16:33:36.187740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:23.100 [2024-11-05 16:33:36.187785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:23.100 [2024-11-05 16:33:36.188013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.100 pt1 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.360 "name": "raid_bdev1", 00:19:23.360 "uuid": "1a0388b6-b30d-4f76-8ae7-0cdcc03586c6", 00:19:23.360 "strip_size_kb": 0, 00:19:23.360 "state": "online", 00:19:23.360 
"raid_level": "raid1", 00:19:23.360 "superblock": true, 00:19:23.360 "num_base_bdevs": 2, 00:19:23.360 "num_base_bdevs_discovered": 1, 00:19:23.360 "num_base_bdevs_operational": 1, 00:19:23.360 "base_bdevs_list": [ 00:19:23.360 { 00:19:23.360 "name": null, 00:19:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.360 "is_configured": false, 00:19:23.360 "data_offset": 256, 00:19:23.360 "data_size": 7936 00:19:23.360 }, 00:19:23.360 { 00:19:23.360 "name": "pt2", 00:19:23.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.360 "is_configured": true, 00:19:23.360 "data_offset": 256, 00:19:23.360 "data_size": 7936 00:19:23.360 } 00:19:23.360 ] 00:19:23.360 }' 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.360 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:19:23.619 [2024-11-05 16:33:36.687291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.619 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.878 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 '!=' 1a0388b6-b30d-4f76-8ae7-0cdcc03586c6 ']' 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86538 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86538 ']' 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86538 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86538 00:19:23.879 killing process with pid 86538 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86538' 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86538 00:19:23.879 [2024-11-05 16:33:36.764505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.879 [2024-11-05 16:33:36.764602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.879 [2024-11-05 16:33:36.764650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.879 [2024-11-05 
16:33:36.764664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:23.879 16:33:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86538 00:19:24.138 [2024-11-05 16:33:36.972775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.076 16:33:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:25.076 00:19:25.076 real 0m6.123s 00:19:25.076 user 0m9.290s 00:19:25.076 sys 0m1.105s 00:19:25.076 16:33:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:25.076 ************************************ 00:19:25.076 END TEST raid_superblock_test_4k 00:19:25.076 ************************************ 00:19:25.076 16:33:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.076 16:33:38 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:25.076 16:33:38 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:25.076 16:33:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:25.076 16:33:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:25.076 16:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.076 ************************************ 00:19:25.076 START TEST raid_rebuild_test_sb_4k 00:19:25.076 ************************************ 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:25.076 16:33:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86862 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86862 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86862 ']' 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.076 16:33:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.336 [2024-11-05 16:33:38.238278] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:19:25.336 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:25.336 Zero copy mechanism will not be used. 
00:19:25.336 [2024-11-05 16:33:38.238481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86862 ] 00:19:25.336 [2024-11-05 16:33:38.411058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.595 [2024-11-05 16:33:38.523754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.855 [2024-11-05 16:33:38.720086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.855 [2024-11-05 16:33:38.720231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.115 BaseBdev1_malloc 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.115 [2024-11-05 16:33:39.131752] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:26.115 [2024-11-05 16:33:39.131857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.115 [2024-11-05 16:33:39.131882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:26.115 [2024-11-05 16:33:39.131893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.115 [2024-11-05 16:33:39.134040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.115 [2024-11-05 16:33:39.134077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:26.115 BaseBdev1 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.115 BaseBdev2_malloc 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.115 [2024-11-05 16:33:39.185355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:26.115 [2024-11-05 16:33:39.185415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:26.115 [2024-11-05 16:33:39.185433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:26.115 [2024-11-05 16:33:39.185445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.115 [2024-11-05 16:33:39.187475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.115 [2024-11-05 16:33:39.187515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:26.115 BaseBdev2 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.115 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.381 spare_malloc 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.381 spare_delay 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.381 
[2024-11-05 16:33:39.261486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:26.381 [2024-11-05 16:33:39.261572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.381 [2024-11-05 16:33:39.261596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:26.381 [2024-11-05 16:33:39.261607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.381 [2024-11-05 16:33:39.263711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.381 [2024-11-05 16:33:39.263832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:26.381 spare 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.381 [2024-11-05 16:33:39.269561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.381 [2024-11-05 16:33:39.271419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.381 [2024-11-05 16:33:39.271670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:26.381 [2024-11-05 16:33:39.271693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:26.381 [2024-11-05 16:33:39.271946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:26.381 [2024-11-05 16:33:39.272119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:26.381 [2024-11-05 
16:33:39.272141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:26.381 [2024-11-05 16:33:39.272322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.381 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.381 "name": "raid_bdev1", 00:19:26.382 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:26.382 "strip_size_kb": 0, 00:19:26.382 "state": "online", 00:19:26.382 "raid_level": "raid1", 00:19:26.382 "superblock": true, 00:19:26.382 "num_base_bdevs": 2, 00:19:26.382 "num_base_bdevs_discovered": 2, 00:19:26.382 "num_base_bdevs_operational": 2, 00:19:26.382 "base_bdevs_list": [ 00:19:26.382 { 00:19:26.382 "name": "BaseBdev1", 00:19:26.382 "uuid": "48a2ce8c-27bc-54dc-a369-9b21f2bb147f", 00:19:26.382 "is_configured": true, 00:19:26.382 "data_offset": 256, 00:19:26.382 "data_size": 7936 00:19:26.382 }, 00:19:26.382 { 00:19:26.382 "name": "BaseBdev2", 00:19:26.382 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:26.382 "is_configured": true, 00:19:26.382 "data_offset": 256, 00:19:26.382 "data_size": 7936 00:19:26.382 } 00:19:26.382 ] 00:19:26.382 }' 00:19:26.382 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.382 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.651 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:26.651 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.651 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.651 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:26.651 [2024-11-05 16:33:39.737069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.911 16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.911 
16:33:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:27.170 [2024-11-05 16:33:40.020322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:27.170 /dev/nbd0 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.170 1+0 records in 00:19:27.170 1+0 records out 00:19:27.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249287 s, 16.4 MB/s 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:27.170 16:33:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:27.170 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:27.737 7936+0 records in 00:19:27.737 7936+0 records out 00:19:27.737 32505856 bytes (33 MB, 31 MiB) copied, 0.630953 s, 51.5 MB/s 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:27.737 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:27.997 
[2024-11-05 16:33:40.931705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 [2024-11-05 16:33:40.949008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.997 16:33:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 16:33:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.997 "name": "raid_bdev1", 00:19:27.997 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:27.997 "strip_size_kb": 0, 00:19:27.997 "state": "online", 00:19:27.997 "raid_level": "raid1", 00:19:27.997 "superblock": true, 00:19:27.997 "num_base_bdevs": 2, 00:19:27.997 "num_base_bdevs_discovered": 1, 00:19:27.997 "num_base_bdevs_operational": 1, 00:19:27.997 "base_bdevs_list": [ 00:19:27.997 { 00:19:27.997 "name": null, 00:19:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.997 "is_configured": false, 00:19:27.997 "data_offset": 0, 00:19:27.997 "data_size": 7936 00:19:27.997 }, 00:19:27.997 { 00:19:27.997 "name": "BaseBdev2", 00:19:27.997 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:27.997 "is_configured": true, 00:19:27.997 "data_offset": 256, 00:19:27.997 
"data_size": 7936 00:19:27.997 } 00:19:27.997 ] 00:19:27.997 }' 00:19:27.997 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.997 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.565 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.565 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.565 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:28.565 [2024-11-05 16:33:41.360321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.565 [2024-11-05 16:33:41.377299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:28.565 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.565 16:33:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:28.565 [2024-11-05 16:33:41.379059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.504 "name": "raid_bdev1", 00:19:29.504 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:29.504 "strip_size_kb": 0, 00:19:29.504 "state": "online", 00:19:29.504 "raid_level": "raid1", 00:19:29.504 "superblock": true, 00:19:29.504 "num_base_bdevs": 2, 00:19:29.504 "num_base_bdevs_discovered": 2, 00:19:29.504 "num_base_bdevs_operational": 2, 00:19:29.504 "process": { 00:19:29.504 "type": "rebuild", 00:19:29.504 "target": "spare", 00:19:29.504 "progress": { 00:19:29.504 "blocks": 2560, 00:19:29.504 "percent": 32 00:19:29.504 } 00:19:29.504 }, 00:19:29.504 "base_bdevs_list": [ 00:19:29.504 { 00:19:29.504 "name": "spare", 00:19:29.504 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:29.504 "is_configured": true, 00:19:29.504 "data_offset": 256, 00:19:29.504 "data_size": 7936 00:19:29.504 }, 00:19:29.504 { 00:19:29.504 "name": "BaseBdev2", 00:19:29.504 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:29.504 "is_configured": true, 00:19:29.504 "data_offset": 256, 00:19:29.504 "data_size": 7936 00:19:29.504 } 00:19:29.504 ] 00:19:29.504 }' 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.504 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.504 [2024-11-05 16:33:42.534398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.504 [2024-11-05 16:33:42.584464] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:29.504 [2024-11-05 16:33:42.584538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.504 [2024-11-05 16:33:42.584553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.504 [2024-11-05 16:33:42.584563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.764 "name": "raid_bdev1", 00:19:29.764 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:29.764 "strip_size_kb": 0, 00:19:29.764 "state": "online", 00:19:29.764 "raid_level": "raid1", 00:19:29.764 "superblock": true, 00:19:29.764 "num_base_bdevs": 2, 00:19:29.764 "num_base_bdevs_discovered": 1, 00:19:29.764 "num_base_bdevs_operational": 1, 00:19:29.764 "base_bdevs_list": [ 00:19:29.764 { 00:19:29.764 "name": null, 00:19:29.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.764 "is_configured": false, 00:19:29.764 "data_offset": 0, 00:19:29.764 "data_size": 7936 00:19:29.764 }, 00:19:29.764 { 00:19:29.764 "name": "BaseBdev2", 00:19:29.764 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:29.764 "is_configured": true, 00:19:29.764 "data_offset": 256, 00:19:29.764 "data_size": 7936 00:19:29.764 } 00:19:29.764 ] 00:19:29.764 }' 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.764 16:33:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.023 16:33:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.023 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.023 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.023 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.024 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.283 "name": "raid_bdev1", 00:19:30.283 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:30.283 "strip_size_kb": 0, 00:19:30.283 "state": "online", 00:19:30.283 "raid_level": "raid1", 00:19:30.283 "superblock": true, 00:19:30.283 "num_base_bdevs": 2, 00:19:30.283 "num_base_bdevs_discovered": 1, 00:19:30.283 "num_base_bdevs_operational": 1, 00:19:30.283 "base_bdevs_list": [ 00:19:30.283 { 00:19:30.283 "name": null, 00:19:30.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.283 "is_configured": false, 00:19:30.283 "data_offset": 0, 00:19:30.283 "data_size": 7936 00:19:30.283 }, 00:19:30.283 { 00:19:30.283 "name": "BaseBdev2", 00:19:30.283 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:30.283 "is_configured": true, 00:19:30.283 "data_offset": 
256, 00:19:30.283 "data_size": 7936 00:19:30.283 } 00:19:30.283 ] 00:19:30.283 }' 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 [2024-11-05 16:33:43.217382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.283 [2024-11-05 16:33:43.234446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.283 16:33:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:30.283 [2024-11-05 16:33:43.236362] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.222 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.222 "name": "raid_bdev1", 00:19:31.222 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:31.222 "strip_size_kb": 0, 00:19:31.222 "state": "online", 00:19:31.222 "raid_level": "raid1", 00:19:31.222 "superblock": true, 00:19:31.222 "num_base_bdevs": 2, 00:19:31.222 "num_base_bdevs_discovered": 2, 00:19:31.222 "num_base_bdevs_operational": 2, 00:19:31.222 "process": { 00:19:31.222 "type": "rebuild", 00:19:31.222 "target": "spare", 00:19:31.222 "progress": { 00:19:31.222 "blocks": 2560, 00:19:31.222 "percent": 32 00:19:31.222 } 00:19:31.222 }, 00:19:31.222 "base_bdevs_list": [ 00:19:31.222 { 00:19:31.222 "name": "spare", 00:19:31.222 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:31.222 "is_configured": true, 00:19:31.222 "data_offset": 256, 00:19:31.222 "data_size": 7936 00:19:31.222 }, 00:19:31.222 { 00:19:31.222 "name": "BaseBdev2", 00:19:31.222 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:31.222 "is_configured": true, 00:19:31.222 "data_offset": 256, 00:19:31.222 "data_size": 7936 00:19:31.222 } 00:19:31.222 ] 00:19:31.222 }' 00:19:31.223 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:31.482 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=696 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.482 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.482 "name": "raid_bdev1", 00:19:31.483 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:31.483 "strip_size_kb": 0, 00:19:31.483 "state": "online", 00:19:31.483 "raid_level": "raid1", 00:19:31.483 "superblock": true, 00:19:31.483 "num_base_bdevs": 2, 00:19:31.483 "num_base_bdevs_discovered": 2, 00:19:31.483 "num_base_bdevs_operational": 2, 00:19:31.483 "process": { 00:19:31.483 "type": "rebuild", 00:19:31.483 "target": "spare", 00:19:31.483 "progress": { 00:19:31.483 "blocks": 2816, 00:19:31.483 "percent": 35 00:19:31.483 } 00:19:31.483 }, 00:19:31.483 "base_bdevs_list": [ 00:19:31.483 { 00:19:31.483 "name": "spare", 00:19:31.483 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:31.483 "is_configured": true, 00:19:31.483 "data_offset": 256, 00:19:31.483 "data_size": 7936 00:19:31.483 }, 00:19:31.483 { 00:19:31.483 "name": "BaseBdev2", 00:19:31.483 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:31.483 "is_configured": true, 00:19:31.483 "data_offset": 256, 00:19:31.483 "data_size": 7936 00:19:31.483 } 00:19:31.483 ] 00:19:31.483 }' 00:19:31.483 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.483 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.483 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.483 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.483 16:33:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.863 "name": "raid_bdev1", 00:19:32.863 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:32.863 "strip_size_kb": 0, 00:19:32.863 "state": "online", 00:19:32.863 "raid_level": "raid1", 00:19:32.863 "superblock": true, 00:19:32.863 "num_base_bdevs": 2, 00:19:32.863 "num_base_bdevs_discovered": 2, 00:19:32.863 "num_base_bdevs_operational": 2, 00:19:32.863 "process": { 00:19:32.863 "type": "rebuild", 00:19:32.863 "target": "spare", 00:19:32.863 "progress": { 00:19:32.863 "blocks": 5888, 00:19:32.863 "percent": 74 00:19:32.863 } 00:19:32.863 }, 00:19:32.863 "base_bdevs_list": [ 00:19:32.863 { 
00:19:32.863 "name": "spare", 00:19:32.863 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 256, 00:19:32.863 "data_size": 7936 00:19:32.863 }, 00:19:32.863 { 00:19:32.863 "name": "BaseBdev2", 00:19:32.863 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 256, 00:19:32.863 "data_size": 7936 00:19:32.863 } 00:19:32.863 ] 00:19:32.863 }' 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.863 16:33:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:33.432 [2024-11-05 16:33:46.350022] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:33.432 [2024-11-05 16:33:46.350093] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:33.432 [2024-11-05 16:33:46.350199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.694 "name": "raid_bdev1", 00:19:33.694 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:33.694 "strip_size_kb": 0, 00:19:33.694 "state": "online", 00:19:33.694 "raid_level": "raid1", 00:19:33.694 "superblock": true, 00:19:33.694 "num_base_bdevs": 2, 00:19:33.694 "num_base_bdevs_discovered": 2, 00:19:33.694 "num_base_bdevs_operational": 2, 00:19:33.694 "base_bdevs_list": [ 00:19:33.694 { 00:19:33.694 "name": "spare", 00:19:33.694 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:33.694 "is_configured": true, 00:19:33.694 "data_offset": 256, 00:19:33.694 "data_size": 7936 00:19:33.694 }, 00:19:33.694 { 00:19:33.694 "name": "BaseBdev2", 00:19:33.694 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:33.694 "is_configured": true, 00:19:33.694 "data_offset": 256, 00:19:33.694 "data_size": 7936 00:19:33.694 } 00:19:33.694 ] 00:19:33.694 }' 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:33.694 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.961 "name": "raid_bdev1", 00:19:33.961 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:33.961 "strip_size_kb": 0, 00:19:33.961 "state": "online", 00:19:33.961 "raid_level": "raid1", 00:19:33.961 "superblock": true, 00:19:33.961 "num_base_bdevs": 2, 00:19:33.961 "num_base_bdevs_discovered": 2, 00:19:33.961 "num_base_bdevs_operational": 2, 00:19:33.961 "base_bdevs_list": [ 00:19:33.961 { 00:19:33.961 "name": "spare", 00:19:33.961 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:33.961 "is_configured": true, 00:19:33.961 
"data_offset": 256, 00:19:33.961 "data_size": 7936 00:19:33.961 }, 00:19:33.961 { 00:19:33.961 "name": "BaseBdev2", 00:19:33.961 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:33.961 "is_configured": true, 00:19:33.961 "data_offset": 256, 00:19:33.961 "data_size": 7936 00:19:33.961 } 00:19:33.961 ] 00:19:33.961 }' 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.961 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.961 "name": "raid_bdev1", 00:19:33.961 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:33.961 "strip_size_kb": 0, 00:19:33.961 "state": "online", 00:19:33.961 "raid_level": "raid1", 00:19:33.961 "superblock": true, 00:19:33.961 "num_base_bdevs": 2, 00:19:33.961 "num_base_bdevs_discovered": 2, 00:19:33.961 "num_base_bdevs_operational": 2, 00:19:33.961 "base_bdevs_list": [ 00:19:33.961 { 00:19:33.961 "name": "spare", 00:19:33.961 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:33.961 "is_configured": true, 00:19:33.961 "data_offset": 256, 00:19:33.961 "data_size": 7936 00:19:33.961 }, 00:19:33.961 { 00:19:33.961 "name": "BaseBdev2", 00:19:33.961 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:33.961 "is_configured": true, 00:19:33.961 "data_offset": 256, 00:19:33.961 "data_size": 7936 00:19:33.962 } 00:19:33.962 ] 00:19:33.962 }' 00:19:33.962 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.962 16:33:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.531 
[2024-11-05 16:33:47.368316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.531 [2024-11-05 16:33:47.368418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.531 [2024-11-05 16:33:47.368512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.531 [2024-11-05 16:33:47.368612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.531 [2024-11-05 16:33:47.368626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.531 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:34.791 /dev/nbd0 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.791 1+0 records in 00:19:34.791 1+0 records out 00:19:34.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332596 s, 12.3 MB/s 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:34.791 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.792 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:34.792 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:34.792 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.792 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.792 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:35.051 /dev/nbd1 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.051 1+0 records in 00:19:35.051 1+0 records out 00:19:35.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371467 s, 11.0 MB/s 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:35.051 16:33:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:35.051 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:35.051 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:35.051 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:35.051 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.051 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:35.052 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.052 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.311 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:35.571 16:33:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.571 [2024-11-05 16:33:48.571042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.571 [2024-11-05 16:33:48.571104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.571 [2024-11-05 16:33:48.571130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:35.571 [2024-11-05 16:33:48.571138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.571 [2024-11-05 16:33:48.573385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.571 
[2024-11-05 16:33:48.573476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.571 [2024-11-05 16:33:48.573601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:35.571 [2024-11-05 16:33:48.573662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.571 [2024-11-05 16:33:48.573841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.571 spare 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.571 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.831 [2024-11-05 16:33:48.673750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:35.831 [2024-11-05 16:33:48.673784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.831 [2024-11-05 16:33:48.674092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:35.831 [2024-11-05 16:33:48.674279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:35.831 [2024-11-05 16:33:48.674290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:35.831 [2024-11-05 16:33:48.674469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.831 16:33:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.831 "name": "raid_bdev1", 00:19:35.831 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:35.831 "strip_size_kb": 0, 00:19:35.831 "state": "online", 00:19:35.831 "raid_level": "raid1", 00:19:35.831 "superblock": true, 00:19:35.831 "num_base_bdevs": 2, 00:19:35.831 "num_base_bdevs_discovered": 2, 00:19:35.831 "num_base_bdevs_operational": 2, 
00:19:35.831 "base_bdevs_list": [ 00:19:35.831 { 00:19:35.831 "name": "spare", 00:19:35.831 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:35.831 "is_configured": true, 00:19:35.831 "data_offset": 256, 00:19:35.831 "data_size": 7936 00:19:35.831 }, 00:19:35.831 { 00:19:35.831 "name": "BaseBdev2", 00:19:35.831 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:35.831 "is_configured": true, 00:19:35.831 "data_offset": 256, 00:19:35.831 "data_size": 7936 00:19:35.831 } 00:19:35.831 ] 00:19:35.831 }' 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.831 16:33:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.091 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.352 "name": "raid_bdev1", 00:19:36.352 
"uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:36.352 "strip_size_kb": 0, 00:19:36.352 "state": "online", 00:19:36.352 "raid_level": "raid1", 00:19:36.352 "superblock": true, 00:19:36.352 "num_base_bdevs": 2, 00:19:36.352 "num_base_bdevs_discovered": 2, 00:19:36.352 "num_base_bdevs_operational": 2, 00:19:36.352 "base_bdevs_list": [ 00:19:36.352 { 00:19:36.352 "name": "spare", 00:19:36.352 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:36.352 "is_configured": true, 00:19:36.352 "data_offset": 256, 00:19:36.352 "data_size": 7936 00:19:36.352 }, 00:19:36.352 { 00:19:36.352 "name": "BaseBdev2", 00:19:36.352 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:36.352 "is_configured": true, 00:19:36.352 "data_offset": 256, 00:19:36.352 "data_size": 7936 00:19:36.352 } 00:19:36.352 ] 00:19:36.352 }' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.352 [2024-11-05 16:33:49.341796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.352 16:33:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.352 "name": "raid_bdev1", 00:19:36.352 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:36.352 "strip_size_kb": 0, 00:19:36.352 "state": "online", 00:19:36.352 "raid_level": "raid1", 00:19:36.352 "superblock": true, 00:19:36.352 "num_base_bdevs": 2, 00:19:36.352 "num_base_bdevs_discovered": 1, 00:19:36.352 "num_base_bdevs_operational": 1, 00:19:36.352 "base_bdevs_list": [ 00:19:36.352 { 00:19:36.352 "name": null, 00:19:36.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.352 "is_configured": false, 00:19:36.352 "data_offset": 0, 00:19:36.352 "data_size": 7936 00:19:36.352 }, 00:19:36.352 { 00:19:36.352 "name": "BaseBdev2", 00:19:36.352 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:36.352 "is_configured": true, 00:19:36.352 "data_offset": 256, 00:19:36.352 "data_size": 7936 00:19:36.352 } 00:19:36.352 ] 00:19:36.352 }' 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.352 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.921 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.921 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.921 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.921 [2024-11-05 16:33:49.761109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.921 [2024-11-05 16:33:49.761367] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:19:36.921 [2024-11-05 16:33:49.761429] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:36.921 [2024-11-05 16:33:49.761485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.921 [2024-11-05 16:33:49.777962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:36.921 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.921 16:33:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:36.921 [2024-11-05 16:33:49.779884] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:37.860 "name": "raid_bdev1", 00:19:37.860 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:37.860 "strip_size_kb": 0, 00:19:37.860 "state": "online", 00:19:37.860 "raid_level": "raid1", 00:19:37.860 "superblock": true, 00:19:37.860 "num_base_bdevs": 2, 00:19:37.860 "num_base_bdevs_discovered": 2, 00:19:37.860 "num_base_bdevs_operational": 2, 00:19:37.860 "process": { 00:19:37.860 "type": "rebuild", 00:19:37.860 "target": "spare", 00:19:37.860 "progress": { 00:19:37.860 "blocks": 2560, 00:19:37.860 "percent": 32 00:19:37.860 } 00:19:37.860 }, 00:19:37.860 "base_bdevs_list": [ 00:19:37.860 { 00:19:37.860 "name": "spare", 00:19:37.860 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:37.860 "is_configured": true, 00:19:37.860 "data_offset": 256, 00:19:37.860 "data_size": 7936 00:19:37.860 }, 00:19:37.860 { 00:19:37.860 "name": "BaseBdev2", 00:19:37.860 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:37.860 "is_configured": true, 00:19:37.860 "data_offset": 256, 00:19:37.860 "data_size": 7936 00:19:37.860 } 00:19:37.860 ] 00:19:37.860 }' 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.860 16:33:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 [2024-11-05 16:33:50.943686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:38.120 [2024-11-05 16:33:50.985274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:38.120 [2024-11-05 16:33:50.985353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.120 [2024-11-05 16:33:50.985368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.120 [2024-11-05 16:33:50.985378] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.120 "name": "raid_bdev1", 00:19:38.120 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:38.120 "strip_size_kb": 0, 00:19:38.120 "state": "online", 00:19:38.120 "raid_level": "raid1", 00:19:38.120 "superblock": true, 00:19:38.120 "num_base_bdevs": 2, 00:19:38.120 "num_base_bdevs_discovered": 1, 00:19:38.120 "num_base_bdevs_operational": 1, 00:19:38.120 "base_bdevs_list": [ 00:19:38.120 { 00:19:38.120 "name": null, 00:19:38.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.120 "is_configured": false, 00:19:38.120 "data_offset": 0, 00:19:38.120 "data_size": 7936 00:19:38.120 }, 00:19:38.120 { 00:19:38.120 "name": "BaseBdev2", 00:19:38.120 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:38.120 "is_configured": true, 00:19:38.120 "data_offset": 256, 00:19:38.120 "data_size": 7936 00:19:38.120 } 00:19:38.120 ] 00:19:38.120 }' 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.120 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.380 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:38.380 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.380 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.380 [2024-11-05 16:33:51.464553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:38.380 [2024-11-05 
16:33:51.464668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.380 [2024-11-05 16:33:51.464708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:38.380 [2024-11-05 16:33:51.464739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.380 [2024-11-05 16:33:51.465293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.380 [2024-11-05 16:33:51.465369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:38.380 [2024-11-05 16:33:51.465502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:38.380 [2024-11-05 16:33:51.465571] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:38.380 [2024-11-05 16:33:51.465623] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:38.380 [2024-11-05 16:33:51.465674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:38.639 [2024-11-05 16:33:51.482826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:38.639 spare 00:19:38.639 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.639 [2024-11-05 16:33:51.484758] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.639 16:33:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.576 "name": "raid_bdev1", 00:19:39.576 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:39.576 "strip_size_kb": 0, 00:19:39.576 
"state": "online", 00:19:39.576 "raid_level": "raid1", 00:19:39.576 "superblock": true, 00:19:39.576 "num_base_bdevs": 2, 00:19:39.576 "num_base_bdevs_discovered": 2, 00:19:39.576 "num_base_bdevs_operational": 2, 00:19:39.576 "process": { 00:19:39.576 "type": "rebuild", 00:19:39.576 "target": "spare", 00:19:39.576 "progress": { 00:19:39.576 "blocks": 2560, 00:19:39.576 "percent": 32 00:19:39.576 } 00:19:39.576 }, 00:19:39.576 "base_bdevs_list": [ 00:19:39.576 { 00:19:39.576 "name": "spare", 00:19:39.576 "uuid": "1cc20a81-166d-54d4-9159-c71ece4f585e", 00:19:39.576 "is_configured": true, 00:19:39.576 "data_offset": 256, 00:19:39.576 "data_size": 7936 00:19:39.576 }, 00:19:39.576 { 00:19:39.576 "name": "BaseBdev2", 00:19:39.576 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:39.576 "is_configured": true, 00:19:39.576 "data_offset": 256, 00:19:39.576 "data_size": 7936 00:19:39.576 } 00:19:39.576 ] 00:19:39.576 }' 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.576 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.576 [2024-11-05 16:33:52.620503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:39.836 [2024-11-05 16:33:52.690084] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:39.836 [2024-11-05 16:33:52.690158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.836 [2024-11-05 16:33:52.690175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:39.836 [2024-11-05 16:33:52.690182] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.836 16:33:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.836 "name": "raid_bdev1", 00:19:39.836 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:39.836 "strip_size_kb": 0, 00:19:39.836 "state": "online", 00:19:39.836 "raid_level": "raid1", 00:19:39.836 "superblock": true, 00:19:39.836 "num_base_bdevs": 2, 00:19:39.836 "num_base_bdevs_discovered": 1, 00:19:39.836 "num_base_bdevs_operational": 1, 00:19:39.836 "base_bdevs_list": [ 00:19:39.836 { 00:19:39.836 "name": null, 00:19:39.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.836 "is_configured": false, 00:19:39.836 "data_offset": 0, 00:19:39.836 "data_size": 7936 00:19:39.836 }, 00:19:39.836 { 00:19:39.836 "name": "BaseBdev2", 00:19:39.836 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:39.836 "is_configured": true, 00:19:39.836 "data_offset": 256, 00:19:39.836 "data_size": 7936 00:19:39.836 } 00:19:39.836 ] 00:19:39.836 }' 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.836 16:33:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.096 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.356 "name": "raid_bdev1", 00:19:40.356 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:40.356 "strip_size_kb": 0, 00:19:40.356 "state": "online", 00:19:40.356 "raid_level": "raid1", 00:19:40.356 "superblock": true, 00:19:40.356 "num_base_bdevs": 2, 00:19:40.356 "num_base_bdevs_discovered": 1, 00:19:40.356 "num_base_bdevs_operational": 1, 00:19:40.356 "base_bdevs_list": [ 00:19:40.356 { 00:19:40.356 "name": null, 00:19:40.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.356 "is_configured": false, 00:19:40.356 "data_offset": 0, 00:19:40.356 "data_size": 7936 00:19:40.356 }, 00:19:40.356 { 00:19:40.356 "name": "BaseBdev2", 00:19:40.356 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:40.356 "is_configured": true, 00:19:40.356 "data_offset": 256, 00:19:40.356 "data_size": 7936 00:19:40.356 } 00:19:40.356 ] 00:19:40.356 }' 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.356 [2024-11-05 16:33:53.304169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:40.356 [2024-11-05 16:33:53.304265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.356 [2024-11-05 16:33:53.304305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:40.356 [2024-11-05 16:33:53.304347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.356 [2024-11-05 16:33:53.304841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.356 [2024-11-05 16:33:53.304900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:40.356 [2024-11-05 16:33:53.305014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:40.356 [2024-11-05 16:33:53.305054] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:40.356 [2024-11-05 16:33:53.305095] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:40.356 [2024-11-05 16:33:53.305141] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:40.356 BaseBdev1 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.356 16:33:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.298 "name": "raid_bdev1", 00:19:41.298 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:41.298 "strip_size_kb": 0, 00:19:41.298 "state": "online", 00:19:41.298 "raid_level": "raid1", 00:19:41.298 "superblock": true, 00:19:41.298 "num_base_bdevs": 2, 00:19:41.298 "num_base_bdevs_discovered": 1, 00:19:41.298 "num_base_bdevs_operational": 1, 00:19:41.298 "base_bdevs_list": [ 00:19:41.298 { 00:19:41.298 "name": null, 00:19:41.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.298 "is_configured": false, 00:19:41.298 "data_offset": 0, 00:19:41.298 "data_size": 7936 00:19:41.298 }, 00:19:41.298 { 00:19:41.298 "name": "BaseBdev2", 00:19:41.298 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:41.298 "is_configured": true, 00:19:41.298 "data_offset": 256, 00:19:41.298 "data_size": 7936 00:19:41.298 } 00:19:41.298 ] 00:19:41.298 }' 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.298 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.867 "name": "raid_bdev1", 00:19:41.867 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:41.867 "strip_size_kb": 0, 00:19:41.867 "state": "online", 00:19:41.867 "raid_level": "raid1", 00:19:41.867 "superblock": true, 00:19:41.867 "num_base_bdevs": 2, 00:19:41.867 "num_base_bdevs_discovered": 1, 00:19:41.867 "num_base_bdevs_operational": 1, 00:19:41.867 "base_bdevs_list": [ 00:19:41.867 { 00:19:41.867 "name": null, 00:19:41.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.867 "is_configured": false, 00:19:41.867 "data_offset": 0, 00:19:41.867 "data_size": 7936 00:19:41.867 }, 00:19:41.867 { 00:19:41.867 "name": "BaseBdev2", 00:19:41.867 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:41.867 "is_configured": true, 00:19:41.867 "data_offset": 256, 00:19:41.867 "data_size": 7936 00:19:41.867 } 00:19:41.867 ] 00:19:41.867 }' 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.867 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.868 [2024-11-05 16:33:54.897504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.868 [2024-11-05 16:33:54.897688] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.868 [2024-11-05 16:33:54.897704] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:41.868 request: 00:19:41.868 { 00:19:41.868 "base_bdev": "BaseBdev1", 00:19:41.868 "raid_bdev": "raid_bdev1", 00:19:41.868 "method": "bdev_raid_add_base_bdev", 00:19:41.868 "req_id": 1 00:19:41.868 } 00:19:41.868 Got JSON-RPC error response 00:19:41.868 response: 00:19:41.868 { 00:19:41.868 "code": -22, 00:19:41.868 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:41.868 } 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:41.868 16:33:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.250 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.251 "name": "raid_bdev1", 00:19:43.251 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:43.251 "strip_size_kb": 0, 00:19:43.251 "state": "online", 00:19:43.251 "raid_level": "raid1", 00:19:43.251 "superblock": true, 00:19:43.251 "num_base_bdevs": 2, 00:19:43.251 "num_base_bdevs_discovered": 1, 00:19:43.251 "num_base_bdevs_operational": 1, 00:19:43.251 "base_bdevs_list": [ 00:19:43.251 { 00:19:43.251 "name": null, 00:19:43.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.251 "is_configured": false, 00:19:43.251 "data_offset": 0, 00:19:43.251 "data_size": 7936 00:19:43.251 }, 00:19:43.251 { 00:19:43.251 "name": "BaseBdev2", 00:19:43.251 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:43.251 "is_configured": true, 00:19:43.251 "data_offset": 256, 00:19:43.251 "data_size": 7936 00:19:43.251 } 00:19:43.251 ] 00:19:43.251 }' 00:19:43.251 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.251 16:33:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.511 16:33:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.511 "name": "raid_bdev1", 00:19:43.511 "uuid": "21bb3fb7-5469-4147-9b31-b14016a7134c", 00:19:43.511 "strip_size_kb": 0, 00:19:43.511 "state": "online", 00:19:43.511 "raid_level": "raid1", 00:19:43.511 "superblock": true, 00:19:43.511 "num_base_bdevs": 2, 00:19:43.511 "num_base_bdevs_discovered": 1, 00:19:43.511 "num_base_bdevs_operational": 1, 00:19:43.511 "base_bdevs_list": [ 00:19:43.511 { 00:19:43.511 "name": null, 00:19:43.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.511 "is_configured": false, 00:19:43.511 "data_offset": 0, 00:19:43.511 "data_size": 7936 00:19:43.511 }, 00:19:43.511 { 00:19:43.511 "name": "BaseBdev2", 00:19:43.511 "uuid": "74d6c793-96a7-5372-b705-71f2e7b632ab", 00:19:43.511 "is_configured": true, 00:19:43.511 "data_offset": 256, 00:19:43.511 "data_size": 7936 00:19:43.511 } 00:19:43.511 ] 00:19:43.511 }' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.511 16:33:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86862 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86862 ']' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86862 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86862 00:19:43.511 killing process with pid 86862 00:19:43.511 Received shutdown signal, test time was about 60.000000 seconds 00:19:43.511 00:19:43.511 Latency(us) 00:19:43.511 [2024-11-05T16:33:56.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.511 [2024-11-05T16:33:56.599Z] =================================================================================================================== 00:19:43.511 [2024-11-05T16:33:56.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86862' 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86862 00:19:43.511 [2024-11-05 16:33:56.571933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:43.511 [2024-11-05 16:33:56.572071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.511 16:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86862 00:19:43.511 [2024-11-05 
16:33:56.572131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.511 [2024-11-05 16:33:56.572145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:44.080 [2024-11-05 16:33:56.885342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.019 16:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:45.019 00:19:45.019 real 0m19.847s 00:19:45.019 user 0m25.986s 00:19:45.019 sys 0m2.508s 00:19:45.019 16:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:45.019 ************************************ 00:19:45.019 END TEST raid_rebuild_test_sb_4k 00:19:45.019 ************************************ 00:19:45.019 16:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:45.019 16:33:58 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:45.019 16:33:58 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:45.019 16:33:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:45.019 16:33:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:45.019 16:33:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.019 ************************************ 00:19:45.019 START TEST raid_state_function_test_sb_md_separate 00:19:45.019 ************************************ 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:45.019 
16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:45.019 16:33:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:45.019 Process raid pid: 87557 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87557 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87557' 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87557 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87557 ']' 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.019 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:45.279 [2024-11-05 16:33:58.144023] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:19:45.279 [2024-11-05 16:33:58.144277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.279 [2024-11-05 16:33:58.320622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.538 [2024-11-05 16:33:58.431265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.800 [2024-11-05 16:33:58.634619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.800 [2024-11-05 16:33:58.634735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.062 [2024-11-05 16:33:58.980347] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.062 [2024-11-05 16:33:58.980402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.062 [2024-11-05 16:33:58.980415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.062 [2024-11-05 16:33:58.980426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.062 16:33:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.062 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.062 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.062 "name": "Existed_Raid", 00:19:46.062 "uuid": "b71f9831-84ff-463d-bcef-7c24894c80f6", 00:19:46.062 "strip_size_kb": 0, 00:19:46.062 "state": "configuring", 00:19:46.062 "raid_level": "raid1", 00:19:46.062 "superblock": true, 00:19:46.062 "num_base_bdevs": 2, 00:19:46.062 "num_base_bdevs_discovered": 0, 00:19:46.062 "num_base_bdevs_operational": 2, 00:19:46.062 "base_bdevs_list": [ 00:19:46.062 { 00:19:46.062 "name": "BaseBdev1", 00:19:46.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.062 "is_configured": false, 00:19:46.062 "data_offset": 0, 00:19:46.062 "data_size": 0 00:19:46.062 }, 00:19:46.062 { 00:19:46.062 "name": "BaseBdev2", 00:19:46.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.062 "is_configured": false, 00:19:46.062 "data_offset": 0, 00:19:46.062 "data_size": 0 00:19:46.062 } 00:19:46.062 ] 00:19:46.062 }' 00:19:46.062 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.062 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:46.630 16:33:59 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 [2024-11-05 16:33:59.435512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:46.630 [2024-11-05 16:33:59.435613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 [2024-11-05 16:33:59.447481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:46.630 [2024-11-05 16:33:59.447576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:46.630 [2024-11-05 16:33:59.447607] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.630 [2024-11-05 16:33:59.447619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 [2024-11-05 16:33:59.497411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.630 BaseBdev1 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.630 [ 00:19:46.630 { 00:19:46.630 "name": "BaseBdev1", 00:19:46.630 "aliases": [ 00:19:46.630 "de7aa482-04ed-4291-9485-f2a17e59684d" 00:19:46.630 ], 00:19:46.630 "product_name": "Malloc disk", 00:19:46.630 "block_size": 4096, 00:19:46.630 "num_blocks": 8192, 00:19:46.630 "uuid": "de7aa482-04ed-4291-9485-f2a17e59684d", 00:19:46.630 "md_size": 32, 00:19:46.630 "md_interleave": false, 00:19:46.630 "dif_type": 0, 00:19:46.630 "assigned_rate_limits": { 00:19:46.630 "rw_ios_per_sec": 0, 00:19:46.630 "rw_mbytes_per_sec": 0, 00:19:46.630 "r_mbytes_per_sec": 0, 00:19:46.630 "w_mbytes_per_sec": 0 00:19:46.630 }, 00:19:46.630 "claimed": true, 00:19:46.630 "claim_type": "exclusive_write", 00:19:46.630 "zoned": false, 00:19:46.630 "supported_io_types": { 00:19:46.630 "read": true, 00:19:46.630 "write": true, 00:19:46.630 "unmap": true, 00:19:46.630 "flush": true, 00:19:46.630 "reset": true, 00:19:46.630 "nvme_admin": false, 00:19:46.630 "nvme_io": false, 00:19:46.630 "nvme_io_md": false, 00:19:46.630 "write_zeroes": true, 00:19:46.630 "zcopy": true, 00:19:46.630 "get_zone_info": false, 00:19:46.630 "zone_management": false, 00:19:46.630 "zone_append": false, 00:19:46.630 "compare": false, 00:19:46.630 "compare_and_write": false, 00:19:46.630 "abort": true, 00:19:46.630 "seek_hole": false, 00:19:46.630 "seek_data": false, 00:19:46.630 "copy": true, 00:19:46.630 "nvme_iov_md": false 00:19:46.630 }, 00:19:46.630 "memory_domains": [ 00:19:46.630 { 00:19:46.630 "dma_device_id": "system", 00:19:46.630 "dma_device_type": 1 00:19:46.630 }, 00:19:46.630 { 00:19:46.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.630 "dma_device_type": 2 00:19:46.630 } 00:19:46.630 ], 00:19:46.630 "driver_specific": {} 00:19:46.630 } 00:19:46.630 ] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@909 -- # return 0 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:46.630 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.630 "name": "Existed_Raid", 00:19:46.630 "uuid": "c15bd5c7-ab1a-4ef6-af76-b26ebf324556", 00:19:46.630 "strip_size_kb": 0, 00:19:46.630 "state": "configuring", 00:19:46.630 "raid_level": "raid1", 00:19:46.630 "superblock": true, 00:19:46.630 "num_base_bdevs": 2, 00:19:46.630 "num_base_bdevs_discovered": 1, 00:19:46.631 "num_base_bdevs_operational": 2, 00:19:46.631 "base_bdevs_list": [ 00:19:46.631 { 00:19:46.631 "name": "BaseBdev1", 00:19:46.631 "uuid": "de7aa482-04ed-4291-9485-f2a17e59684d", 00:19:46.631 "is_configured": true, 00:19:46.631 "data_offset": 256, 00:19:46.631 "data_size": 7936 00:19:46.631 }, 00:19:46.631 { 00:19:46.631 "name": "BaseBdev2", 00:19:46.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.631 "is_configured": false, 00:19:46.631 "data_offset": 0, 00:19:46.631 "data_size": 0 00:19:46.631 } 00:19:46.631 ] 00:19:46.631 }' 00:19:46.631 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.631 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.200 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.200 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.200 16:33:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.200 [2024-11-05 16:34:00.004667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.200 [2024-11-05 16:34:00.004727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.200 [2024-11-05 16:34:00.016690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.200 [2024-11-05 16:34:00.018734] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.200 [2024-11-05 16:34:00.018778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.200 "name": "Existed_Raid", 00:19:47.200 "uuid": "a79d0433-1cfb-46c3-a929-e0ff32b05728", 00:19:47.200 "strip_size_kb": 0, 00:19:47.200 "state": "configuring", 00:19:47.200 "raid_level": "raid1", 00:19:47.200 "superblock": true, 00:19:47.200 "num_base_bdevs": 2, 00:19:47.200 "num_base_bdevs_discovered": 1, 00:19:47.200 "num_base_bdevs_operational": 2, 00:19:47.200 "base_bdevs_list": [ 00:19:47.200 { 00:19:47.200 "name": "BaseBdev1", 00:19:47.200 "uuid": "de7aa482-04ed-4291-9485-f2a17e59684d", 00:19:47.200 "is_configured": true, 00:19:47.200 "data_offset": 256, 00:19:47.200 "data_size": 7936 00:19:47.200 }, 00:19:47.200 { 00:19:47.200 "name": "BaseBdev2", 00:19:47.200 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:47.200 "is_configured": false, 00:19:47.200 "data_offset": 0, 00:19:47.200 "data_size": 0 00:19:47.200 } 00:19:47.200 ] 00:19:47.200 }' 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.200 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.460 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:47.460 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.460 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.460 [2024-11-05 16:34:00.491525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.460 [2024-11-05 16:34:00.491877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:47.460 [2024-11-05 16:34:00.491927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:47.460 [2024-11-05 16:34:00.492039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:47.460 [2024-11-05 16:34:00.492216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:47.460 [2024-11-05 16:34:00.492262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:47.460 BaseBdev2 00:19:47.460 [2024-11-05 16:34:00.492430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.460 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.460 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:47.461 
16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.461 [ 00:19:47.461 { 00:19:47.461 "name": "BaseBdev2", 00:19:47.461 "aliases": [ 00:19:47.461 "47a7589c-1ab5-4330-8cce-f97b05f38a75" 00:19:47.461 ], 00:19:47.461 "product_name": "Malloc disk", 00:19:47.461 "block_size": 4096, 00:19:47.461 "num_blocks": 8192, 00:19:47.461 "uuid": "47a7589c-1ab5-4330-8cce-f97b05f38a75", 00:19:47.461 "md_size": 32, 00:19:47.461 "md_interleave": false, 00:19:47.461 "dif_type": 0, 00:19:47.461 "assigned_rate_limits": { 00:19:47.461 
"rw_ios_per_sec": 0, 00:19:47.461 "rw_mbytes_per_sec": 0, 00:19:47.461 "r_mbytes_per_sec": 0, 00:19:47.461 "w_mbytes_per_sec": 0 00:19:47.461 }, 00:19:47.461 "claimed": true, 00:19:47.461 "claim_type": "exclusive_write", 00:19:47.461 "zoned": false, 00:19:47.461 "supported_io_types": { 00:19:47.461 "read": true, 00:19:47.461 "write": true, 00:19:47.461 "unmap": true, 00:19:47.461 "flush": true, 00:19:47.461 "reset": true, 00:19:47.461 "nvme_admin": false, 00:19:47.461 "nvme_io": false, 00:19:47.461 "nvme_io_md": false, 00:19:47.461 "write_zeroes": true, 00:19:47.461 "zcopy": true, 00:19:47.461 "get_zone_info": false, 00:19:47.461 "zone_management": false, 00:19:47.461 "zone_append": false, 00:19:47.461 "compare": false, 00:19:47.461 "compare_and_write": false, 00:19:47.461 "abort": true, 00:19:47.461 "seek_hole": false, 00:19:47.461 "seek_data": false, 00:19:47.461 "copy": true, 00:19:47.461 "nvme_iov_md": false 00:19:47.461 }, 00:19:47.461 "memory_domains": [ 00:19:47.461 { 00:19:47.461 "dma_device_id": "system", 00:19:47.461 "dma_device_type": 1 00:19:47.461 }, 00:19:47.461 { 00:19:47.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.461 "dma_device_type": 2 00:19:47.461 } 00:19:47.461 ], 00:19:47.461 "driver_specific": {} 00:19:47.461 } 00:19:47.461 ] 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.461 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.721 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.721 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.721 "name": "Existed_Raid", 00:19:47.721 "uuid": "a79d0433-1cfb-46c3-a929-e0ff32b05728", 00:19:47.721 "strip_size_kb": 0, 00:19:47.721 "state": "online", 
00:19:47.721 "raid_level": "raid1", 00:19:47.721 "superblock": true, 00:19:47.721 "num_base_bdevs": 2, 00:19:47.721 "num_base_bdevs_discovered": 2, 00:19:47.721 "num_base_bdevs_operational": 2, 00:19:47.721 "base_bdevs_list": [ 00:19:47.721 { 00:19:47.721 "name": "BaseBdev1", 00:19:47.721 "uuid": "de7aa482-04ed-4291-9485-f2a17e59684d", 00:19:47.721 "is_configured": true, 00:19:47.721 "data_offset": 256, 00:19:47.721 "data_size": 7936 00:19:47.721 }, 00:19:47.721 { 00:19:47.721 "name": "BaseBdev2", 00:19:47.721 "uuid": "47a7589c-1ab5-4330-8cce-f97b05f38a75", 00:19:47.721 "is_configured": true, 00:19:47.721 "data_offset": 256, 00:19:47.721 "data_size": 7936 00:19:47.721 } 00:19:47.721 ] 00:19:47.721 }' 00:19:47.721 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.721 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.981 
16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:47.981 [2024-11-05 16:34:00.987050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.981 16:34:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.981 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.981 "name": "Existed_Raid", 00:19:47.981 "aliases": [ 00:19:47.981 "a79d0433-1cfb-46c3-a929-e0ff32b05728" 00:19:47.981 ], 00:19:47.981 "product_name": "Raid Volume", 00:19:47.981 "block_size": 4096, 00:19:47.981 "num_blocks": 7936, 00:19:47.981 "uuid": "a79d0433-1cfb-46c3-a929-e0ff32b05728", 00:19:47.981 "md_size": 32, 00:19:47.981 "md_interleave": false, 00:19:47.981 "dif_type": 0, 00:19:47.981 "assigned_rate_limits": { 00:19:47.981 "rw_ios_per_sec": 0, 00:19:47.981 "rw_mbytes_per_sec": 0, 00:19:47.981 "r_mbytes_per_sec": 0, 00:19:47.981 "w_mbytes_per_sec": 0 00:19:47.981 }, 00:19:47.981 "claimed": false, 00:19:47.981 "zoned": false, 00:19:47.981 "supported_io_types": { 00:19:47.981 "read": true, 00:19:47.981 "write": true, 00:19:47.981 "unmap": false, 00:19:47.981 "flush": false, 00:19:47.981 "reset": true, 00:19:47.981 "nvme_admin": false, 00:19:47.981 "nvme_io": false, 00:19:47.981 "nvme_io_md": false, 00:19:47.981 "write_zeroes": true, 00:19:47.981 "zcopy": false, 00:19:47.981 "get_zone_info": false, 00:19:47.981 "zone_management": false, 00:19:47.981 "zone_append": false, 00:19:47.981 "compare": false, 00:19:47.981 "compare_and_write": false, 00:19:47.981 "abort": false, 00:19:47.981 "seek_hole": false, 00:19:47.981 "seek_data": false, 00:19:47.981 "copy": false, 00:19:47.981 "nvme_iov_md": false 00:19:47.981 }, 00:19:47.981 "memory_domains": [ 00:19:47.981 { 00:19:47.981 
"dma_device_id": "system", 00:19:47.981 "dma_device_type": 1 00:19:47.981 }, 00:19:47.981 { 00:19:47.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.981 "dma_device_type": 2 00:19:47.981 }, 00:19:47.981 { 00:19:47.981 "dma_device_id": "system", 00:19:47.981 "dma_device_type": 1 00:19:47.981 }, 00:19:47.981 { 00:19:47.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.981 "dma_device_type": 2 00:19:47.981 } 00:19:47.981 ], 00:19:47.981 "driver_specific": { 00:19:47.981 "raid": { 00:19:47.981 "uuid": "a79d0433-1cfb-46c3-a929-e0ff32b05728", 00:19:47.981 "strip_size_kb": 0, 00:19:47.981 "state": "online", 00:19:47.981 "raid_level": "raid1", 00:19:47.981 "superblock": true, 00:19:47.981 "num_base_bdevs": 2, 00:19:47.981 "num_base_bdevs_discovered": 2, 00:19:47.981 "num_base_bdevs_operational": 2, 00:19:47.981 "base_bdevs_list": [ 00:19:47.981 { 00:19:47.981 "name": "BaseBdev1", 00:19:47.981 "uuid": "de7aa482-04ed-4291-9485-f2a17e59684d", 00:19:47.981 "is_configured": true, 00:19:47.981 "data_offset": 256, 00:19:47.981 "data_size": 7936 00:19:47.981 }, 00:19:47.981 { 00:19:47.981 "name": "BaseBdev2", 00:19:47.981 "uuid": "47a7589c-1ab5-4330-8cce-f97b05f38a75", 00:19:47.981 "is_configured": true, 00:19:47.981 "data_offset": 256, 00:19:47.981 "data_size": 7936 00:19:47.981 } 00:19:47.981 ] 00:19:47.981 } 00:19:47.981 } 00:19:47.981 }' 00:19:47.981 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:48.241 BaseBdev2' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 
false 0' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.241 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 [2024-11-05 16:34:01.238369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.500 16:34:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.500 "name": "Existed_Raid", 00:19:48.500 "uuid": "a79d0433-1cfb-46c3-a929-e0ff32b05728", 00:19:48.500 "strip_size_kb": 0, 00:19:48.500 "state": "online", 00:19:48.500 "raid_level": "raid1", 00:19:48.500 "superblock": true, 00:19:48.500 "num_base_bdevs": 2, 00:19:48.500 "num_base_bdevs_discovered": 1, 00:19:48.500 "num_base_bdevs_operational": 1, 00:19:48.500 "base_bdevs_list": [ 00:19:48.500 { 00:19:48.500 "name": null, 00:19:48.500 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:48.500 "is_configured": false, 00:19:48.500 "data_offset": 0, 00:19:48.500 "data_size": 7936 00:19:48.500 }, 00:19:48.500 { 00:19:48.500 "name": "BaseBdev2", 00:19:48.500 "uuid": "47a7589c-1ab5-4330-8cce-f97b05f38a75", 00:19:48.500 "is_configured": true, 00:19:48.500 "data_offset": 256, 00:19:48.500 "data_size": 7936 00:19:48.500 } 00:19:48.500 ] 00:19:48.500 }' 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.500 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:48.773 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.047 [2024-11-05 16:34:01.854783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.047 [2024-11-05 16:34:01.855046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.047 [2024-11-05 16:34:01.982010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.047 [2024-11-05 16:34:01.982083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.047 [2024-11-05 16:34:01.982100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.047 16:34:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' 
-n '' ']' 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87557 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87557 ']' 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87557 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87557 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87557' 00:19:49.047 killing process with pid 87557 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87557 00:19:49.047 [2024-11-05 16:34:02.077759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.047 16:34:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87557 00:19:49.047 [2024-11-05 16:34:02.098459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.429 16:34:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:50.429 00:19:50.429 real 0m5.250s 00:19:50.429 user 0m7.566s 00:19:50.429 sys 
0m0.826s 00:19:50.429 16:34:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.429 16:34:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.429 ************************************ 00:19:50.429 END TEST raid_state_function_test_sb_md_separate 00:19:50.429 ************************************ 00:19:50.429 16:34:03 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:50.429 16:34:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:50.429 16:34:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.429 16:34:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.429 ************************************ 00:19:50.429 START TEST raid_superblock_test_md_separate 00:19:50.429 ************************************ 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # 
local base_bdevs_pt_uuid 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87807 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87807 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87807 ']' 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.429 16:34:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.429 [2024-11-05 16:34:03.443085] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:19:50.429 [2024-11-05 16:34:03.443287] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87807 ] 00:19:50.689 [2024-11-05 16:34:03.615250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.689 [2024-11-05 16:34:03.755024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.948 [2024-11-05 16:34:04.004349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.948 [2024-11-05 16:34:04.004544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:51.518 16:34:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 malloc1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 [2024-11-05 16:34:04.356088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:51.518 [2024-11-05 16:34:04.356253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.518 [2024-11-05 16:34:04.356286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:51.518 [2024-11-05 16:34:04.356299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.518 [2024-11-05 16:34:04.358495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.518 [2024-11-05 16:34:04.358556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:51.518 pt1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 malloc2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 [2024-11-05 16:34:04.421646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:51.518 [2024-11-05 16:34:04.421713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.518 [2024-11-05 16:34:04.421739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:51.518 [2024-11-05 16:34:04.421750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.518 [2024-11-05 16:34:04.423900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.518 [2024-11-05 16:34:04.423940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:51.518 pt2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 [2024-11-05 16:34:04.433657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:51.518 [2024-11-05 16:34:04.435853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:51.518 [2024-11-05 16:34:04.436058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:51.518 [2024-11-05 16:34:04.436085] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:51.518 [2024-11-05 16:34:04.436172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:51.518 [2024-11-05 16:34:04.436308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:51.518 [2024-11-05 16:34:04.436323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:51.518 [2024-11-05 16:34:04.436433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.518 16:34:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.518 "name": "raid_bdev1", 00:19:51.518 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:51.518 "strip_size_kb": 0, 00:19:51.518 "state": "online", 00:19:51.518 "raid_level": "raid1", 00:19:51.518 "superblock": true, 00:19:51.518 "num_base_bdevs": 2, 00:19:51.518 "num_base_bdevs_discovered": 2, 00:19:51.518 "num_base_bdevs_operational": 2, 00:19:51.518 "base_bdevs_list": [ 00:19:51.518 { 00:19:51.518 "name": "pt1", 00:19:51.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:51.518 "is_configured": true, 00:19:51.518 "data_offset": 256, 00:19:51.518 "data_size": 7936 00:19:51.518 }, 00:19:51.518 { 00:19:51.518 "name": "pt2", 00:19:51.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:51.518 "is_configured": true, 00:19:51.518 "data_offset": 256, 00:19:51.518 "data_size": 7936 00:19:51.518 } 00:19:51.518 ] 00:19:51.518 }' 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.518 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.778 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:51.778 [2024-11-05 16:34:04.865238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:52.038 "name": "raid_bdev1", 00:19:52.038 "aliases": [ 00:19:52.038 "896321da-7411-4936-ad65-c021b0a41c9d" 00:19:52.038 ], 00:19:52.038 "product_name": "Raid Volume", 00:19:52.038 "block_size": 4096, 00:19:52.038 "num_blocks": 7936, 00:19:52.038 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:52.038 "md_size": 32, 00:19:52.038 "md_interleave": false, 00:19:52.038 "dif_type": 0, 00:19:52.038 "assigned_rate_limits": { 00:19:52.038 "rw_ios_per_sec": 0, 00:19:52.038 "rw_mbytes_per_sec": 0, 00:19:52.038 "r_mbytes_per_sec": 0, 00:19:52.038 "w_mbytes_per_sec": 0 00:19:52.038 }, 00:19:52.038 "claimed": false, 00:19:52.038 "zoned": false, 
00:19:52.038 "supported_io_types": { 00:19:52.038 "read": true, 00:19:52.038 "write": true, 00:19:52.038 "unmap": false, 00:19:52.038 "flush": false, 00:19:52.038 "reset": true, 00:19:52.038 "nvme_admin": false, 00:19:52.038 "nvme_io": false, 00:19:52.038 "nvme_io_md": false, 00:19:52.038 "write_zeroes": true, 00:19:52.038 "zcopy": false, 00:19:52.038 "get_zone_info": false, 00:19:52.038 "zone_management": false, 00:19:52.038 "zone_append": false, 00:19:52.038 "compare": false, 00:19:52.038 "compare_and_write": false, 00:19:52.038 "abort": false, 00:19:52.038 "seek_hole": false, 00:19:52.038 "seek_data": false, 00:19:52.038 "copy": false, 00:19:52.038 "nvme_iov_md": false 00:19:52.038 }, 00:19:52.038 "memory_domains": [ 00:19:52.038 { 00:19:52.038 "dma_device_id": "system", 00:19:52.038 "dma_device_type": 1 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.038 "dma_device_type": 2 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "dma_device_id": "system", 00:19:52.038 "dma_device_type": 1 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.038 "dma_device_type": 2 00:19:52.038 } 00:19:52.038 ], 00:19:52.038 "driver_specific": { 00:19:52.038 "raid": { 00:19:52.038 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:52.038 "strip_size_kb": 0, 00:19:52.038 "state": "online", 00:19:52.038 "raid_level": "raid1", 00:19:52.038 "superblock": true, 00:19:52.038 "num_base_bdevs": 2, 00:19:52.038 "num_base_bdevs_discovered": 2, 00:19:52.038 "num_base_bdevs_operational": 2, 00:19:52.038 "base_bdevs_list": [ 00:19:52.038 { 00:19:52.038 "name": "pt1", 00:19:52.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:52.038 "is_configured": true, 00:19:52.038 "data_offset": 256, 00:19:52.038 "data_size": 7936 00:19:52.038 }, 00:19:52.038 { 00:19:52.038 "name": "pt2", 00:19:52.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.038 "is_configured": true, 00:19:52.038 "data_offset": 256, 
00:19:52.038 "data_size": 7936 00:19:52.038 } 00:19:52.038 ] 00:19:52.038 } 00:19:52.038 } 00:19:52.038 }' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:52.038 pt2' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.038 16:34:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:52.038 [2024-11-05 16:34:05.036890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=896321da-7411-4936-ad65-c021b0a41c9d 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 896321da-7411-4936-ad65-c021b0a41c9d ']' 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:52.038 16:34:05 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.038 [2024-11-05 16:34:05.084635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.038 [2024-11-05 16:34:05.084666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.038 [2024-11-05 16:34:05.084770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.038 [2024-11-05 16:34:05.084839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.038 [2024-11-05 16:34:05.084853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.038 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:52.039 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.039 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.039 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 [2024-11-05 16:34:05.224656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:52.298 [2024-11-05 16:34:05.226838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:52.298 [2024-11-05 16:34:05.227040] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:52.298 [2024-11-05 16:34:05.227109] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:52.298 [2024-11-05 16:34:05.227126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.298 [2024-11-05 16:34:05.227139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:19:52.298 request: 00:19:52.298 { 00:19:52.298 "name": "raid_bdev1", 00:19:52.298 "raid_level": "raid1", 00:19:52.298 "base_bdevs": [ 00:19:52.298 "malloc1", 00:19:52.298 "malloc2" 00:19:52.298 ], 00:19:52.298 "superblock": false, 00:19:52.298 "method": "bdev_raid_create", 00:19:52.298 "req_id": 1 00:19:52.298 } 00:19:52.298 Got JSON-RPC error response 00:19:52.298 response: 00:19:52.298 { 00:19:52.298 "code": -17, 00:19:52.298 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:52.298 } 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 [2024-11-05 16:34:05.284602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:52.298 [2024-11-05 16:34:05.284744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.298 [2024-11-05 16:34:05.284784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:52.298 [2024-11-05 16:34:05.284825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.298 [2024-11-05 16:34:05.287166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.298 [2024-11-05 16:34:05.287262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:52.298 [2024-11-05 16:34:05.287353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:52.298 [2024-11-05 16:34:05.287445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:52.298 pt1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.298 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.298 "name": "raid_bdev1", 00:19:52.299 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:52.299 "strip_size_kb": 0, 00:19:52.299 "state": "configuring", 00:19:52.299 "raid_level": "raid1", 00:19:52.299 "superblock": true, 00:19:52.299 "num_base_bdevs": 2, 00:19:52.299 "num_base_bdevs_discovered": 1, 00:19:52.299 "num_base_bdevs_operational": 2, 00:19:52.299 "base_bdevs_list": [ 00:19:52.299 { 00:19:52.299 "name": "pt1", 00:19:52.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:52.299 "is_configured": true, 00:19:52.299 "data_offset": 256, 00:19:52.299 "data_size": 7936 00:19:52.299 }, 00:19:52.299 { 
00:19:52.299 "name": null, 00:19:52.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:52.299 "is_configured": false, 00:19:52.299 "data_offset": 256, 00:19:52.299 "data_size": 7936 00:19:52.299 } 00:19:52.299 ] 00:19:52.299 }' 00:19:52.299 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.299 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.867 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.867 [2024-11-05 16:34:05.715849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:52.867 [2024-11-05 16:34:05.716038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.867 [2024-11-05 16:34:05.716080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:52.868 [2024-11-05 16:34:05.716096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.868 [2024-11-05 16:34:05.716397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.868 [2024-11-05 16:34:05.716418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:52.868 [2024-11-05 16:34:05.716485] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:52.868 [2024-11-05 16:34:05.716513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:52.868 [2024-11-05 16:34:05.716680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:52.868 [2024-11-05 16:34:05.716696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:52.868 [2024-11-05 16:34:05.716779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:52.868 [2024-11-05 16:34:05.716918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:52.868 [2024-11-05 16:34:05.716929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:52.868 [2024-11-05 16:34:05.717068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.868 pt2 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.868 16:34:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.868 "name": "raid_bdev1", 00:19:52.868 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:52.868 "strip_size_kb": 0, 00:19:52.868 "state": "online", 00:19:52.868 "raid_level": "raid1", 00:19:52.868 "superblock": true, 00:19:52.868 "num_base_bdevs": 2, 00:19:52.868 "num_base_bdevs_discovered": 2, 00:19:52.868 "num_base_bdevs_operational": 2, 00:19:52.868 "base_bdevs_list": [ 00:19:52.868 { 00:19:52.868 "name": "pt1", 00:19:52.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:52.868 "is_configured": true, 00:19:52.868 "data_offset": 256, 00:19:52.868 "data_size": 7936 00:19:52.868 }, 00:19:52.868 { 00:19:52.868 "name": "pt2", 00:19:52.868 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:52.868 "is_configured": true, 00:19:52.868 "data_offset": 256, 00:19:52.868 "data_size": 7936 00:19:52.868 } 00:19:52.868 ] 00:19:52.868 }' 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.868 16:34:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:53.128 [2024-11-05 16:34:06.111610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.128 "name": "raid_bdev1", 00:19:53.128 
"aliases": [ 00:19:53.128 "896321da-7411-4936-ad65-c021b0a41c9d" 00:19:53.128 ], 00:19:53.128 "product_name": "Raid Volume", 00:19:53.128 "block_size": 4096, 00:19:53.128 "num_blocks": 7936, 00:19:53.128 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:53.128 "md_size": 32, 00:19:53.128 "md_interleave": false, 00:19:53.128 "dif_type": 0, 00:19:53.128 "assigned_rate_limits": { 00:19:53.128 "rw_ios_per_sec": 0, 00:19:53.128 "rw_mbytes_per_sec": 0, 00:19:53.128 "r_mbytes_per_sec": 0, 00:19:53.128 "w_mbytes_per_sec": 0 00:19:53.128 }, 00:19:53.128 "claimed": false, 00:19:53.128 "zoned": false, 00:19:53.128 "supported_io_types": { 00:19:53.128 "read": true, 00:19:53.128 "write": true, 00:19:53.128 "unmap": false, 00:19:53.128 "flush": false, 00:19:53.128 "reset": true, 00:19:53.128 "nvme_admin": false, 00:19:53.128 "nvme_io": false, 00:19:53.128 "nvme_io_md": false, 00:19:53.128 "write_zeroes": true, 00:19:53.128 "zcopy": false, 00:19:53.128 "get_zone_info": false, 00:19:53.128 "zone_management": false, 00:19:53.128 "zone_append": false, 00:19:53.128 "compare": false, 00:19:53.128 "compare_and_write": false, 00:19:53.128 "abort": false, 00:19:53.128 "seek_hole": false, 00:19:53.128 "seek_data": false, 00:19:53.128 "copy": false, 00:19:53.128 "nvme_iov_md": false 00:19:53.128 }, 00:19:53.128 "memory_domains": [ 00:19:53.128 { 00:19:53.128 "dma_device_id": "system", 00:19:53.128 "dma_device_type": 1 00:19:53.128 }, 00:19:53.128 { 00:19:53.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.128 "dma_device_type": 2 00:19:53.128 }, 00:19:53.128 { 00:19:53.128 "dma_device_id": "system", 00:19:53.128 "dma_device_type": 1 00:19:53.128 }, 00:19:53.128 { 00:19:53.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.128 "dma_device_type": 2 00:19:53.128 } 00:19:53.128 ], 00:19:53.128 "driver_specific": { 00:19:53.128 "raid": { 00:19:53.128 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:53.128 "strip_size_kb": 0, 00:19:53.128 "state": "online", 00:19:53.128 
"raid_level": "raid1", 00:19:53.128 "superblock": true, 00:19:53.128 "num_base_bdevs": 2, 00:19:53.128 "num_base_bdevs_discovered": 2, 00:19:53.128 "num_base_bdevs_operational": 2, 00:19:53.128 "base_bdevs_list": [ 00:19:53.128 { 00:19:53.128 "name": "pt1", 00:19:53.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:53.128 "is_configured": true, 00:19:53.128 "data_offset": 256, 00:19:53.128 "data_size": 7936 00:19:53.128 }, 00:19:53.128 { 00:19:53.128 "name": "pt2", 00:19:53.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.128 "is_configured": true, 00:19:53.128 "data_offset": 256, 00:19:53.128 "data_size": 7936 00:19:53.128 } 00:19:53.128 ] 00:19:53.128 } 00:19:53.128 } 00:19:53.128 }' 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:53.128 pt2' 00:19:53.128 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.388 16:34:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.388 [2024-11-05 16:34:06.355179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 896321da-7411-4936-ad65-c021b0a41c9d '!=' 896321da-7411-4936-ad65-c021b0a41c9d ']' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.388 [2024-11-05 16:34:06.386906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.388 
16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.388 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.388 "name": "raid_bdev1", 00:19:53.388 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:53.388 "strip_size_kb": 0, 00:19:53.388 "state": "online", 00:19:53.388 "raid_level": "raid1", 00:19:53.388 "superblock": true, 00:19:53.388 "num_base_bdevs": 2, 00:19:53.388 "num_base_bdevs_discovered": 1, 00:19:53.388 "num_base_bdevs_operational": 1, 00:19:53.388 "base_bdevs_list": [ 00:19:53.388 { 00:19:53.388 "name": null, 00:19:53.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.389 "is_configured": false, 00:19:53.389 "data_offset": 0, 00:19:53.389 "data_size": 7936 00:19:53.389 }, 00:19:53.389 { 00:19:53.389 "name": "pt2", 00:19:53.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.389 "is_configured": true, 00:19:53.389 "data_offset": 256, 00:19:53.389 "data_size": 7936 00:19:53.389 } 
00:19:53.389 ] 00:19:53.389 }' 00:19:53.389 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.389 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.958 [2024-11-05 16:34:06.842553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.958 [2024-11-05 16:34:06.842706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.958 [2024-11-05 16:34:06.842841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.958 [2024-11-05 16:34:06.842932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.958 [2024-11-05 16:34:06.842985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.958 16:34:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.958 [2024-11-05 16:34:06.914349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:53.958 [2024-11-05 
16:34:06.914439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.958 [2024-11-05 16:34:06.914461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:53.958 [2024-11-05 16:34:06.914475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.958 [2024-11-05 16:34:06.916913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.958 [2024-11-05 16:34:06.916961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:53.958 [2024-11-05 16:34:06.917026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:53.958 [2024-11-05 16:34:06.917087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.958 [2024-11-05 16:34:06.917198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:53.958 [2024-11-05 16:34:06.917212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:53.958 [2024-11-05 16:34:06.917300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:53.958 [2024-11-05 16:34:06.917438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:53.958 [2024-11-05 16:34:06.917457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:53.958 [2024-11-05 16:34:06.917582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.958 pt2 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.958 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.959 "name": "raid_bdev1", 00:19:53.959 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:53.959 "strip_size_kb": 0, 00:19:53.959 "state": "online", 00:19:53.959 "raid_level": "raid1", 00:19:53.959 "superblock": true, 00:19:53.959 "num_base_bdevs": 2, 00:19:53.959 
"num_base_bdevs_discovered": 1, 00:19:53.959 "num_base_bdevs_operational": 1, 00:19:53.959 "base_bdevs_list": [ 00:19:53.959 { 00:19:53.959 "name": null, 00:19:53.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.959 "is_configured": false, 00:19:53.959 "data_offset": 256, 00:19:53.959 "data_size": 7936 00:19:53.959 }, 00:19:53.959 { 00:19:53.959 "name": "pt2", 00:19:53.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:53.959 "is_configured": true, 00:19:53.959 "data_offset": 256, 00:19:53.959 "data_size": 7936 00:19:53.959 } 00:19:53.959 ] 00:19:53.959 }' 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.959 16:34:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 [2024-11-05 16:34:07.341651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.527 [2024-11-05 16:34:07.341787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.527 [2024-11-05 16:34:07.341897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.527 [2024-11-05 16:34:07.341978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.527 [2024-11-05 16:34:07.342038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.527 16:34:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.527 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.528 [2024-11-05 16:34:07.405702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:54.528 [2024-11-05 16:34:07.405847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.528 [2024-11-05 16:34:07.405894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:54.528 [2024-11-05 16:34:07.405940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.528 [2024-11-05 16:34:07.408356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.528 [2024-11-05 16:34:07.408445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:19:54.528 [2024-11-05 16:34:07.408559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:54.528 [2024-11-05 16:34:07.408645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:54.528 [2024-11-05 16:34:07.408842] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:54.528 [2024-11-05 16:34:07.408904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.528 [2024-11-05 16:34:07.408956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:54.528 [2024-11-05 16:34:07.409092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:54.528 [2024-11-05 16:34:07.409224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:54.528 [2024-11-05 16:34:07.409266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:54.528 [2024-11-05 16:34:07.409379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:54.528 [2024-11-05 16:34:07.409556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:54.528 [2024-11-05 16:34:07.409605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:54.528 [2024-11-05 16:34:07.409809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.528 pt1 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.528 "name": "raid_bdev1", 00:19:54.528 "uuid": "896321da-7411-4936-ad65-c021b0a41c9d", 00:19:54.528 "strip_size_kb": 0, 00:19:54.528 "state": "online", 00:19:54.528 "raid_level": "raid1", 
00:19:54.528 "superblock": true, 00:19:54.528 "num_base_bdevs": 2, 00:19:54.528 "num_base_bdevs_discovered": 1, 00:19:54.528 "num_base_bdevs_operational": 1, 00:19:54.528 "base_bdevs_list": [ 00:19:54.528 { 00:19:54.528 "name": null, 00:19:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.528 "is_configured": false, 00:19:54.528 "data_offset": 256, 00:19:54.528 "data_size": 7936 00:19:54.528 }, 00:19:54.528 { 00:19:54.528 "name": "pt2", 00:19:54.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:54.528 "is_configured": true, 00:19:54.528 "data_offset": 256, 00:19:54.528 "data_size": 7936 00:19:54.528 } 00:19:54.528 ] 00:19:54.528 }' 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.528 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.788 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:54.788 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:54.788 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.788 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.788 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.048 
16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.048 [2024-11-05 16:34:07.905199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 896321da-7411-4936-ad65-c021b0a41c9d '!=' 896321da-7411-4936-ad65-c021b0a41c9d ']' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87807 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87807 ']' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87807 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87807 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:55.048 killing process with pid 87807 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87807' 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87807 00:19:55.048 [2024-11-05 16:34:07.980341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:55.048 [2024-11-05 16:34:07.980454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:55.048 16:34:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87807 00:19:55.048 [2024-11-05 16:34:07.980515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.048 [2024-11-05 16:34:07.980554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:55.309 [2024-11-05 16:34:08.220267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:56.725 16:34:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:56.725 00:19:56.725 real 0m6.036s 00:19:56.725 user 0m8.898s 00:19:56.725 sys 0m1.165s 00:19:56.725 ************************************ 00:19:56.725 END TEST raid_superblock_test_md_separate 00:19:56.725 ************************************ 00:19:56.725 16:34:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:56.725 16:34:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.725 16:34:09 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:56.725 16:34:09 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:56.725 16:34:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:56.725 16:34:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:56.725 16:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.725 ************************************ 00:19:56.725 START TEST raid_rebuild_test_sb_md_separate 00:19:56.725 ************************************ 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88138 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88138 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88138 ']' 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.725 16:34:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.725 [2024-11-05 16:34:09.583840] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:19:56.725 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:56.725 Zero copy mechanism will not be used. 00:19:56.725 [2024-11-05 16:34:09.584083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88138 ] 00:19:56.725 [2024-11-05 16:34:09.744220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.989 [2024-11-05 16:34:09.879139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.248 [2024-11-05 16:34:10.108849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.248 [2024-11-05 16:34:10.109063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.509 BaseBdev1_malloc 
00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.509 [2024-11-05 16:34:10.490300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.509 [2024-11-05 16:34:10.490389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.509 [2024-11-05 16:34:10.490420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:57.509 [2024-11-05 16:34:10.490435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.509 [2024-11-05 16:34:10.492707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.509 [2024-11-05 16:34:10.492750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.509 BaseBdev1 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.509 BaseBdev2_malloc 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.509 [2024-11-05 16:34:10.555467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:57.509 [2024-11-05 16:34:10.555559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.509 [2024-11-05 16:34:10.555585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:57.509 [2024-11-05 16:34:10.555600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.509 [2024-11-05 16:34:10.557862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.509 [2024-11-05 16:34:10.557994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:57.509 BaseBdev2 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.509 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 spare_malloc 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 spare_delay 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 [2024-11-05 16:34:10.639052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.769 [2024-11-05 16:34:10.639134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.769 [2024-11-05 16:34:10.639159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:57.769 [2024-11-05 16:34:10.639173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.769 [2024-11-05 16:34:10.641397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.769 [2024-11-05 16:34:10.641545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.769 spare 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.769 [2024-11-05 16:34:10.651088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.769 [2024-11-05 16:34:10.653257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.769 [2024-11-05 16:34:10.653532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:57.769 [2024-11-05 16:34:10.653555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:57.769 [2024-11-05 16:34:10.653634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:57.769 [2024-11-05 16:34:10.653773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:57.769 [2024-11-05 16:34:10.653782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:57.769 [2024-11-05 16:34:10.653896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.769 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.770 16:34:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.770 "name": "raid_bdev1", 00:19:57.770 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:19:57.770 "strip_size_kb": 0, 00:19:57.770 "state": "online", 00:19:57.770 "raid_level": "raid1", 00:19:57.770 "superblock": true, 00:19:57.770 "num_base_bdevs": 2, 00:19:57.770 "num_base_bdevs_discovered": 2, 00:19:57.770 "num_base_bdevs_operational": 2, 00:19:57.770 "base_bdevs_list": [ 00:19:57.770 { 00:19:57.770 "name": "BaseBdev1", 00:19:57.770 "uuid": "825a9a98-d00a-57eb-aca6-1550aec9f3fa", 00:19:57.770 "is_configured": true, 00:19:57.770 "data_offset": 256, 00:19:57.770 "data_size": 7936 00:19:57.770 }, 00:19:57.770 { 00:19:57.770 "name": "BaseBdev2", 00:19:57.770 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:19:57.770 "is_configured": true, 00:19:57.770 "data_offset": 256, 00:19:57.770 "data_size": 7936 
00:19:57.770 } 00:19:57.770 ] 00:19:57.770 }' 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.770 16:34:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.030 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:58.030 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:58.030 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.030 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.030 [2024-11-05 16:34:11.114919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.290 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:58.549 [2024-11-05 16:34:11.390185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:58.549 /dev/nbd0 00:19:58.549 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:58.550 1+0 records in
00:19:58.550 1+0 records out
00:19:58.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035165 s, 11.6 MB/s
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:19:58.550 16:34:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:19:59.119 7936+0 records in
00:19:59.119 7936+0 records out
00:19:59.119 32505856 bytes (33 MB, 31 MiB) copied, 0.667121 s, 48.7 MB/s
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:59.119 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:59.379 [2024-11-05 16:34:12.365460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:59.379 [2024-11-05 16:34:12.385539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:59.379 "name": "raid_bdev1",
00:19:59.379 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:19:59.379 "strip_size_kb": 0,
00:19:59.379 "state": "online",
00:19:59.379 "raid_level": "raid1",
00:19:59.379 "superblock": true,
00:19:59.379 "num_base_bdevs": 2,
00:19:59.379 "num_base_bdevs_discovered": 1,
00:19:59.379 "num_base_bdevs_operational": 1,
00:19:59.379 "base_bdevs_list": [
00:19:59.379 {
00:19:59.379 "name": null,
00:19:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:59.379 "is_configured": false,
00:19:59.379 "data_offset": 0,
00:19:59.379 "data_size": 7936
00:19:59.379 },
00:19:59.379 {
00:19:59.379 "name": "BaseBdev2",
00:19:59.379 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:19:59.379 "is_configured": true,
00:19:59.379 "data_offset": 256,
00:19:59.379 "data_size": 7936
00:19:59.379 }
00:19:59.379 ]
00:19:59.379 }'
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:59.379 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:59.948 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:19:59.948 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.948 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:59.948 [2024-11-05 16:34:12.812805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:19:59.948 [2024-11-05 16:34:12.826623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:19:59.948 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:59.948 16:34:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:19:59.948 [2024-11-05 16:34:12.828726] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:00.887 "name": "raid_bdev1",
00:20:00.887 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:00.887 "strip_size_kb": 0,
00:20:00.887 "state": "online",
00:20:00.887 "raid_level": "raid1",
00:20:00.887 "superblock": true,
00:20:00.887 "num_base_bdevs": 2,
00:20:00.887 "num_base_bdevs_discovered": 2,
00:20:00.887 "num_base_bdevs_operational": 2,
00:20:00.887 "process": {
00:20:00.887 "type": "rebuild",
00:20:00.887 "target": "spare",
00:20:00.887 "progress": {
00:20:00.887 "blocks": 2560,
00:20:00.887 "percent": 32
00:20:00.887 }
00:20:00.887 },
00:20:00.887 "base_bdevs_list": [
00:20:00.887 {
00:20:00.887 "name": "spare",
00:20:00.887 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:00.887 "is_configured": true,
00:20:00.887 "data_offset": 256,
00:20:00.887 "data_size": 7936
00:20:00.887 },
00:20:00.887 {
00:20:00.887 "name": "BaseBdev2",
00:20:00.887 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:00.887 "is_configured": true,
00:20:00.887 "data_offset": 256,
00:20:00.887 "data_size": 7936
00:20:00.887 }
00:20:00.887 ]
00:20:00.887 }'
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:00.887 16:34:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:00.887 [2024-11-05 16:34:13.968399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:01.147 [2024-11-05 16:34:14.034730] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:01.147 [2024-11-05 16:34:14.034854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:01.147 [2024-11-05 16:34:14.034870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:01.147 [2024-11-05 16:34:14.034881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:01.147 "name": "raid_bdev1",
00:20:01.147 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:01.147 "strip_size_kb": 0,
00:20:01.147 "state": "online",
00:20:01.147 "raid_level": "raid1",
00:20:01.147 "superblock": true,
00:20:01.147 "num_base_bdevs": 2,
00:20:01.147 "num_base_bdevs_discovered": 1,
00:20:01.147 "num_base_bdevs_operational": 1,
00:20:01.147 "base_bdevs_list": [
00:20:01.147 {
00:20:01.147 "name": null,
00:20:01.147 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:01.147 "is_configured": false,
00:20:01.147 "data_offset": 0,
00:20:01.147 "data_size": 7936
00:20:01.147 },
00:20:01.147 {
00:20:01.147 "name": "BaseBdev2",
00:20:01.147 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:01.147 "is_configured": true,
00:20:01.147 "data_offset": 256,
00:20:01.147 "data_size": 7936
00:20:01.147 }
00:20:01.147 ]
00:20:01.147 }'
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:01.147 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:01.716 "name": "raid_bdev1",
00:20:01.716 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:01.716 "strip_size_kb": 0,
00:20:01.716 "state": "online",
00:20:01.716 "raid_level": "raid1",
00:20:01.716 "superblock": true,
00:20:01.716 "num_base_bdevs": 2,
00:20:01.716 "num_base_bdevs_discovered": 1,
00:20:01.716 "num_base_bdevs_operational": 1,
00:20:01.716 "base_bdevs_list": [
00:20:01.716 {
00:20:01.716 "name": null,
00:20:01.716 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:01.716 "is_configured": false,
00:20:01.716 "data_offset": 0,
00:20:01.716 "data_size": 7936
00:20:01.716 },
00:20:01.716 {
00:20:01.716 "name": "BaseBdev2",
00:20:01.716 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:01.716 "is_configured": true,
00:20:01.716 "data_offset": 256,
00:20:01.716 "data_size": 7936
00:20:01.716 }
00:20:01.716 ]
00:20:01.716 }'
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:01.716 [2024-11-05 16:34:14.650586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:01.716 [2024-11-05 16:34:14.664795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:01.716 16:34:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:20:01.716 [2024-11-05 16:34:14.666827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:02.709 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:02.710 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:02.710 "name": "raid_bdev1",
00:20:02.710 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:02.710 "strip_size_kb": 0,
00:20:02.710 "state": "online",
00:20:02.710 "raid_level": "raid1",
00:20:02.710 "superblock": true,
00:20:02.710 "num_base_bdevs": 2,
00:20:02.710 "num_base_bdevs_discovered": 2,
00:20:02.710 "num_base_bdevs_operational": 2,
00:20:02.710 "process": {
00:20:02.710 "type": "rebuild",
00:20:02.710 "target": "spare",
00:20:02.710 "progress": {
00:20:02.710 "blocks": 2560,
00:20:02.710 "percent": 32
00:20:02.710 }
00:20:02.710 },
00:20:02.710 "base_bdevs_list": [
00:20:02.710 {
00:20:02.710 "name": "spare",
00:20:02.710 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:02.710 "is_configured": true,
00:20:02.710 "data_offset": 256,
00:20:02.710 "data_size": 7936
00:20:02.710 },
00:20:02.710 {
00:20:02.710 "name": "BaseBdev2",
00:20:02.710 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:02.710 "is_configured": true,
00:20:02.710 "data_offset": 256,
00:20:02.710 "data_size": 7936
00:20:02.710 }
00:20:02.710 ]
00:20:02.710 }'
00:20:02.710 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:02.710 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:02.710 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:20:02.975 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=727
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:02.975 "name": "raid_bdev1",
00:20:02.975 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:02.975 "strip_size_kb": 0,
00:20:02.975 "state": "online",
00:20:02.975 "raid_level": "raid1",
00:20:02.975 "superblock": true,
00:20:02.975 "num_base_bdevs": 2,
00:20:02.975 "num_base_bdevs_discovered": 2,
00:20:02.975 "num_base_bdevs_operational": 2,
00:20:02.975 "process": {
00:20:02.975 "type": "rebuild",
00:20:02.975 "target": "spare",
00:20:02.975 "progress": {
00:20:02.975 "blocks": 2816,
00:20:02.975 "percent": 35
00:20:02.975 }
00:20:02.975 },
00:20:02.975 "base_bdevs_list": [
00:20:02.975 {
00:20:02.975 "name": "spare",
00:20:02.975 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:02.975 "is_configured": true,
00:20:02.975 "data_offset": 256,
00:20:02.975 "data_size": 7936
00:20:02.975 },
00:20:02.975 {
00:20:02.975 "name": "BaseBdev2",
00:20:02.975 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:02.975 "is_configured": true,
00:20:02.975 "data_offset": 256,
00:20:02.975 "data_size": 7936
00:20:02.975 }
00:20:02.975 ]
00:20:02.975 }'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:02.975 16:34:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.913 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:03.913 "name": "raid_bdev1",
00:20:03.913 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:03.913 "strip_size_kb": 0,
00:20:03.913 "state": "online",
00:20:03.913 "raid_level": "raid1",
00:20:03.913 "superblock": true,
00:20:03.913 "num_base_bdevs": 2,
00:20:03.913 "num_base_bdevs_discovered": 2,
00:20:03.913 "num_base_bdevs_operational": 2,
00:20:03.913 "process": {
00:20:03.913 "type": "rebuild",
00:20:03.913 "target": "spare",
00:20:03.913 "progress": {
00:20:03.913 "blocks": 5632,
00:20:03.914 "percent": 70
00:20:03.914 }
00:20:03.914 },
00:20:03.914 "base_bdevs_list": [
00:20:03.914 {
00:20:03.914 "name": "spare",
00:20:03.914 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:03.914 "is_configured": true,
00:20:03.914 "data_offset": 256,
00:20:03.914 "data_size": 7936
00:20:03.914 },
00:20:03.914 {
00:20:03.914 "name": "BaseBdev2",
00:20:03.914 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:03.914 "is_configured": true,
00:20:03.914 "data_offset": 256,
00:20:03.914 "data_size": 7936
00:20:03.914 }
00:20:03.914 ]
00:20:03.914 }'
00:20:03.914 16:34:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:04.173 16:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:04.173 16:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:04.173 16:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:04.173 16:34:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:20:04.742 [2024-11-05 16:34:17.781753] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:20:04.742 [2024-11-05 16:34:17.781843] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:20:04.742 [2024-11-05 16:34:17.781974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:05.313 "name": "raid_bdev1",
00:20:05.313 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:05.313 "strip_size_kb": 0,
00:20:05.313 "state": "online",
00:20:05.313 "raid_level": "raid1",
00:20:05.313 "superblock": true,
00:20:05.313 "num_base_bdevs": 2,
00:20:05.313 "num_base_bdevs_discovered": 2,
00:20:05.313 "num_base_bdevs_operational": 2,
00:20:05.313 "base_bdevs_list": [
00:20:05.313 {
00:20:05.313 "name": "spare",
00:20:05.313 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:05.313 "is_configured": true,
00:20:05.313 "data_offset": 256,
00:20:05.313 "data_size": 7936
00:20:05.313 },
00:20:05.313 {
00:20:05.313 "name": "BaseBdev2",
00:20:05.313 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:05.313 "is_configured": true,
00:20:05.313 "data_offset": 256,
00:20:05.313 "data_size": 7936
00:20:05.313 }
00:20:05.313 ]
00:20:05.313 }'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:05.313 "name": "raid_bdev1",
00:20:05.313 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:05.313 "strip_size_kb": 0,
00:20:05.313 "state": "online",
00:20:05.313 "raid_level": "raid1",
00:20:05.313 "superblock": true,
00:20:05.313 "num_base_bdevs": 2,
00:20:05.313 "num_base_bdevs_discovered": 2,
00:20:05.313 "num_base_bdevs_operational": 2,
00:20:05.313 "base_bdevs_list": [
00:20:05.313 {
00:20:05.313 "name": "spare",
00:20:05.313 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:05.313 "is_configured": true,
00:20:05.313 "data_offset": 256,
00:20:05.313 "data_size": 7936
00:20:05.313 },
00:20:05.313 {
00:20:05.313 "name": "BaseBdev2",
00:20:05.313 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:05.313 "is_configured": true,
00:20:05.313 "data_offset": 256,
00:20:05.313 "data_size": 7936
00:20:05.313 }
00:20:05.313 ]
00:20:05.313 }'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.313 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.573 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.573 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:05.573 "name": "raid_bdev1",
00:20:05.573 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd",
00:20:05.573 "strip_size_kb": 0,
00:20:05.573 "state": "online",
00:20:05.573 "raid_level": "raid1",
00:20:05.573 "superblock": true,
00:20:05.573 "num_base_bdevs": 2,
00:20:05.573 "num_base_bdevs_discovered": 2,
00:20:05.573 "num_base_bdevs_operational": 2,
00:20:05.573 "base_bdevs_list": [
00:20:05.573 {
00:20:05.573 "name": "spare",
00:20:05.573 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53",
00:20:05.573 "is_configured": true,
00:20:05.573 "data_offset": 256,
00:20:05.573 "data_size": 7936
00:20:05.573 },
00:20:05.573 {
00:20:05.573 "name": "BaseBdev2",
00:20:05.573 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b",
00:20:05.573 "is_configured": true,
00:20:05.573 "data_offset": 256,
00:20:05.573 "data_size": 7936
00:20:05.573 }
00:20:05.573 ]
00:20:05.573 }'
00:20:05.573 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:05.573 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.833 [2024-11-05 16:34:18.801224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:05.833 [2024-11-05 16:34:18.801329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:05.833 [2024-11-05 16:34:18.801445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:05.833 [2024-11-05 16:34:18.801550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:05.833 [2024-11-05 16:34:18.801620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:20:05.833 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate --
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:05.834 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:05.834 16:34:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:06.093 /dev/nbd0 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.093 1+0 records in 00:20:06.093 1+0 records out 00:20:06.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521651 s, 7.9 MB/s 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:06.093 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:06.352 /dev/nbd1 00:20:06.352 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:06.352 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:06.352 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.353 1+0 records in 00:20:06.353 1+0 records out 00:20:06.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378985 s, 10.8 MB/s 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:06.353 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.612 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:06.870 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.130 16:34:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.130 [2024-11-05 16:34:20.001689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.130 [2024-11-05 16:34:20.001746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.130 [2024-11-05 16:34:20.001771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:07.130 [2024-11-05 16:34:20.001781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:07.130 [2024-11-05 16:34:20.003789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.130 [2024-11-05 16:34:20.003870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.130 [2024-11-05 16:34:20.003941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:07.130 [2024-11-05 16:34:20.004009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.130 [2024-11-05 16:34:20.004174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.130 spare 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.130 [2024-11-05 16:34:20.104079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:07.130 [2024-11-05 16:34:20.104138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:07.130 [2024-11-05 16:34:20.104283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:07.130 [2024-11-05 16:34:20.104452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:07.130 [2024-11-05 16:34:20.104460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:07.130 [2024-11-05 16:34:20.104614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.130 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.130 "name": "raid_bdev1", 00:20:07.130 "uuid": 
"6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:07.130 "strip_size_kb": 0, 00:20:07.130 "state": "online", 00:20:07.130 "raid_level": "raid1", 00:20:07.130 "superblock": true, 00:20:07.130 "num_base_bdevs": 2, 00:20:07.130 "num_base_bdevs_discovered": 2, 00:20:07.131 "num_base_bdevs_operational": 2, 00:20:07.131 "base_bdevs_list": [ 00:20:07.131 { 00:20:07.131 "name": "spare", 00:20:07.131 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53", 00:20:07.131 "is_configured": true, 00:20:07.131 "data_offset": 256, 00:20:07.131 "data_size": 7936 00:20:07.131 }, 00:20:07.131 { 00:20:07.131 "name": "BaseBdev2", 00:20:07.131 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:07.131 "is_configured": true, 00:20:07.131 "data_offset": 256, 00:20:07.131 "data_size": 7936 00:20:07.131 } 00:20:07.131 ] 00:20:07.131 }' 00:20:07.131 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.131 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.704 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.704 "name": "raid_bdev1", 00:20:07.704 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:07.704 "strip_size_kb": 0, 00:20:07.705 "state": "online", 00:20:07.705 "raid_level": "raid1", 00:20:07.705 "superblock": true, 00:20:07.705 "num_base_bdevs": 2, 00:20:07.705 "num_base_bdevs_discovered": 2, 00:20:07.705 "num_base_bdevs_operational": 2, 00:20:07.705 "base_bdevs_list": [ 00:20:07.705 { 00:20:07.705 "name": "spare", 00:20:07.705 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53", 00:20:07.705 "is_configured": true, 00:20:07.705 "data_offset": 256, 00:20:07.705 "data_size": 7936 00:20:07.705 }, 00:20:07.705 { 00:20:07.705 "name": "BaseBdev2", 00:20:07.705 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:07.705 "is_configured": true, 00:20:07.705 "data_offset": 256, 00:20:07.705 "data_size": 7936 00:20:07.705 } 00:20:07.705 ] 00:20:07.705 }' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.705 [2024-11-05 16:34:20.724576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.705 16:34:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.705 "name": "raid_bdev1", 00:20:07.705 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:07.705 "strip_size_kb": 0, 00:20:07.705 "state": "online", 00:20:07.705 "raid_level": "raid1", 00:20:07.705 "superblock": true, 00:20:07.705 "num_base_bdevs": 2, 00:20:07.705 "num_base_bdevs_discovered": 1, 00:20:07.705 "num_base_bdevs_operational": 1, 00:20:07.705 "base_bdevs_list": [ 00:20:07.705 { 00:20:07.705 "name": null, 00:20:07.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.705 "is_configured": false, 00:20:07.705 "data_offset": 0, 00:20:07.705 "data_size": 7936 00:20:07.705 }, 00:20:07.705 { 00:20:07.705 "name": "BaseBdev2", 00:20:07.705 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:07.705 "is_configured": true, 00:20:07.705 "data_offset": 256, 00:20:07.705 "data_size": 7936 00:20:07.705 } 00:20:07.705 ] 00:20:07.705 }' 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.705 16:34:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.276 16:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.276 16:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.276 16:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.276 [2024-11-05 16:34:21.155831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.276 [2024-11-05 16:34:21.156123] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:08.276 [2024-11-05 16:34:21.156200] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:08.276 [2024-11-05 16:34:21.156286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.276 [2024-11-05 16:34:21.171704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:08.276 16:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.276 16:34:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:08.276 [2024-11-05 16:34:21.173724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.212 16:34:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.212 "name": "raid_bdev1", 00:20:09.212 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:09.212 "strip_size_kb": 0, 00:20:09.212 "state": "online", 00:20:09.212 "raid_level": "raid1", 00:20:09.212 "superblock": true, 00:20:09.212 "num_base_bdevs": 2, 00:20:09.212 "num_base_bdevs_discovered": 2, 00:20:09.212 "num_base_bdevs_operational": 2, 00:20:09.212 "process": { 00:20:09.212 "type": "rebuild", 00:20:09.212 "target": "spare", 00:20:09.212 "progress": { 00:20:09.212 "blocks": 2560, 00:20:09.212 "percent": 32 00:20:09.212 } 00:20:09.212 }, 00:20:09.212 "base_bdevs_list": [ 00:20:09.212 { 00:20:09.212 "name": "spare", 00:20:09.212 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53", 00:20:09.212 "is_configured": true, 00:20:09.212 "data_offset": 256, 00:20:09.212 "data_size": 7936 00:20:09.212 }, 00:20:09.212 { 00:20:09.212 "name": "BaseBdev2", 00:20:09.212 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:09.212 "is_configured": true, 00:20:09.212 "data_offset": 256, 00:20:09.212 "data_size": 7936 00:20:09.212 } 00:20:09.212 ] 00:20:09.212 
}' 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.212 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.470 [2024-11-05 16:34:22.334059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.470 [2024-11-05 16:34:22.379098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:09.470 [2024-11-05 16:34:22.379227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.470 [2024-11-05 16:34:22.379260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.470 [2024-11-05 16:34:22.379297] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.470 "name": "raid_bdev1", 00:20:09.470 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:09.470 "strip_size_kb": 0, 00:20:09.470 "state": "online", 00:20:09.470 "raid_level": "raid1", 00:20:09.470 "superblock": true, 00:20:09.470 "num_base_bdevs": 2, 00:20:09.470 "num_base_bdevs_discovered": 1, 00:20:09.470 "num_base_bdevs_operational": 1, 00:20:09.470 "base_bdevs_list": [ 00:20:09.470 { 00:20:09.470 "name": 
null, 00:20:09.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.470 "is_configured": false, 00:20:09.470 "data_offset": 0, 00:20:09.470 "data_size": 7936 00:20:09.470 }, 00:20:09.470 { 00:20:09.470 "name": "BaseBdev2", 00:20:09.470 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:09.470 "is_configured": true, 00:20:09.470 "data_offset": 256, 00:20:09.470 "data_size": 7936 00:20:09.470 } 00:20:09.470 ] 00:20:09.470 }' 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.470 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.038 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:10.038 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.038 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.038 [2024-11-05 16:34:22.882497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:10.038 [2024-11-05 16:34:22.882630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.038 [2024-11-05 16:34:22.882674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:10.038 [2024-11-05 16:34:22.882705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.038 [2024-11-05 16:34:22.882980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.038 [2024-11-05 16:34:22.883033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:10.038 [2024-11-05 16:34:22.883118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:10.038 [2024-11-05 16:34:22.883158] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:10.038 [2024-11-05 16:34:22.883197] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:10.038 [2024-11-05 16:34:22.883280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.038 [2024-11-05 16:34:22.897368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:10.038 spare 00:20:10.038 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.038 16:34:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:10.038 [2024-11-05 16:34:22.899110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.985 16:34:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.985 "name": "raid_bdev1", 00:20:10.985 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:10.985 "strip_size_kb": 0, 00:20:10.985 "state": "online", 00:20:10.985 "raid_level": "raid1", 00:20:10.985 "superblock": true, 00:20:10.985 "num_base_bdevs": 2, 00:20:10.985 "num_base_bdevs_discovered": 2, 00:20:10.985 "num_base_bdevs_operational": 2, 00:20:10.985 "process": { 00:20:10.985 "type": "rebuild", 00:20:10.985 "target": "spare", 00:20:10.985 "progress": { 00:20:10.985 "blocks": 2560, 00:20:10.985 "percent": 32 00:20:10.985 } 00:20:10.985 }, 00:20:10.985 "base_bdevs_list": [ 00:20:10.985 { 00:20:10.985 "name": "spare", 00:20:10.985 "uuid": "0d9a47ee-9c50-5305-89b7-6bd30bd5ab53", 00:20:10.985 "is_configured": true, 00:20:10.985 "data_offset": 256, 00:20:10.985 "data_size": 7936 00:20:10.985 }, 00:20:10.985 { 00:20:10.985 "name": "BaseBdev2", 00:20:10.985 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:10.985 "is_configured": true, 00:20:10.985 "data_offset": 256, 00:20:10.985 "data_size": 7936 00:20:10.985 } 00:20:10.985 ] 00:20:10.985 }' 00:20:10.985 16:34:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.985 [2024-11-05 16:34:24.059216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.244 [2024-11-05 16:34:24.104220] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:11.244 [2024-11-05 16:34:24.104292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.244 [2024-11-05 16:34:24.104310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.244 [2024-11-05 16:34:24.104318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.244 "name": "raid_bdev1", 00:20:11.244 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:11.244 "strip_size_kb": 0, 00:20:11.244 "state": "online", 00:20:11.244 "raid_level": "raid1", 00:20:11.244 "superblock": true, 00:20:11.244 "num_base_bdevs": 2, 00:20:11.244 "num_base_bdevs_discovered": 1, 00:20:11.244 "num_base_bdevs_operational": 1, 00:20:11.244 "base_bdevs_list": [ 00:20:11.244 { 00:20:11.244 "name": null, 00:20:11.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.244 "is_configured": false, 00:20:11.244 "data_offset": 0, 00:20:11.244 "data_size": 7936 00:20:11.244 }, 00:20:11.244 { 00:20:11.244 "name": "BaseBdev2", 00:20:11.244 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:11.244 "is_configured": true, 00:20:11.244 "data_offset": 256, 00:20:11.244 "data_size": 7936 00:20:11.244 } 00:20:11.244 ] 00:20:11.244 }' 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.244 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.504 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.763 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.763 "name": "raid_bdev1", 00:20:11.763 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:11.763 "strip_size_kb": 0, 00:20:11.763 "state": "online", 00:20:11.763 "raid_level": "raid1", 00:20:11.763 "superblock": true, 00:20:11.763 "num_base_bdevs": 2, 00:20:11.763 "num_base_bdevs_discovered": 1, 00:20:11.763 "num_base_bdevs_operational": 1, 00:20:11.763 "base_bdevs_list": [ 00:20:11.763 { 00:20:11.763 "name": null, 00:20:11.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.763 "is_configured": false, 00:20:11.763 "data_offset": 0, 00:20:11.763 "data_size": 7936 00:20:11.763 }, 00:20:11.763 { 00:20:11.763 "name": "BaseBdev2", 00:20:11.763 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 
00:20:11.763 "is_configured": true, 00:20:11.764 "data_offset": 256, 00:20:11.764 "data_size": 7936 00:20:11.764 } 00:20:11.764 ] 00:20:11.764 }' 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.764 [2024-11-05 16:34:24.683618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:11.764 [2024-11-05 16:34:24.683730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.764 [2024-11-05 16:34:24.683759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:11.764 [2024-11-05 16:34:24.683767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:11.764 [2024-11-05 16:34:24.683989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.764 [2024-11-05 16:34:24.684001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:11.764 [2024-11-05 16:34:24.684062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:11.764 [2024-11-05 16:34:24.684078] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:11.764 [2024-11-05 16:34:24.684088] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:11.764 [2024-11-05 16:34:24.684099] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:11.764 BaseBdev1 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.764 16:34:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.702 16:34:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.702 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.702 "name": "raid_bdev1", 00:20:12.702 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:12.702 "strip_size_kb": 0, 00:20:12.702 "state": "online", 00:20:12.702 "raid_level": "raid1", 00:20:12.702 "superblock": true, 00:20:12.702 "num_base_bdevs": 2, 00:20:12.702 "num_base_bdevs_discovered": 1, 00:20:12.702 "num_base_bdevs_operational": 1, 00:20:12.702 "base_bdevs_list": [ 00:20:12.702 { 00:20:12.702 "name": null, 00:20:12.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.703 "is_configured": false, 00:20:12.703 "data_offset": 0, 00:20:12.703 "data_size": 7936 00:20:12.703 }, 00:20:12.703 { 00:20:12.703 "name": "BaseBdev2", 00:20:12.703 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:12.703 "is_configured": true, 00:20:12.703 "data_offset": 256, 00:20:12.703 "data_size": 7936 00:20:12.703 } 00:20:12.703 ] 00:20:12.703 }' 00:20:12.703 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.703 16:34:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.273 "name": "raid_bdev1", 00:20:13.273 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:13.273 "strip_size_kb": 0, 00:20:13.273 "state": "online", 00:20:13.273 "raid_level": "raid1", 00:20:13.273 "superblock": true, 00:20:13.273 "num_base_bdevs": 2, 00:20:13.273 "num_base_bdevs_discovered": 1, 00:20:13.273 "num_base_bdevs_operational": 1, 00:20:13.273 "base_bdevs_list": [ 00:20:13.273 { 00:20:13.273 "name": null, 00:20:13.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.273 
"is_configured": false, 00:20:13.273 "data_offset": 0, 00:20:13.273 "data_size": 7936 00:20:13.273 }, 00:20:13.273 { 00:20:13.273 "name": "BaseBdev2", 00:20:13.273 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:13.273 "is_configured": true, 00:20:13.273 "data_offset": 256, 00:20:13.273 "data_size": 7936 00:20:13.273 } 00:20:13.273 ] 00:20:13.273 }' 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:13.273 16:34:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.273 [2024-11-05 16:34:26.264999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.273 [2024-11-05 16:34:26.265166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:13.273 [2024-11-05 16:34:26.265183] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:13.273 request: 00:20:13.273 { 00:20:13.273 "base_bdev": "BaseBdev1", 00:20:13.273 "raid_bdev": "raid_bdev1", 00:20:13.273 "method": "bdev_raid_add_base_bdev", 00:20:13.273 "req_id": 1 00:20:13.273 } 00:20:13.273 Got JSON-RPC error response 00:20:13.273 response: 00:20:13.273 { 00:20:13.273 "code": -22, 00:20:13.273 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:13.273 } 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.273 16:34:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.212 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.471 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.471 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.471 "name": "raid_bdev1", 00:20:14.471 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:14.471 "strip_size_kb": 0, 00:20:14.471 "state": "online", 00:20:14.471 "raid_level": "raid1", 00:20:14.471 "superblock": true, 00:20:14.471 "num_base_bdevs": 2, 00:20:14.471 
"num_base_bdevs_discovered": 1, 00:20:14.471 "num_base_bdevs_operational": 1, 00:20:14.471 "base_bdevs_list": [ 00:20:14.471 { 00:20:14.471 "name": null, 00:20:14.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.471 "is_configured": false, 00:20:14.471 "data_offset": 0, 00:20:14.471 "data_size": 7936 00:20:14.471 }, 00:20:14.471 { 00:20:14.471 "name": "BaseBdev2", 00:20:14.471 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:14.471 "is_configured": true, 00:20:14.471 "data_offset": 256, 00:20:14.471 "data_size": 7936 00:20:14.471 } 00:20:14.471 ] 00:20:14.471 }' 00:20:14.471 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.471 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.731 "name": "raid_bdev1", 00:20:14.731 "uuid": "6180d19e-b362-481f-bd76-32ee3f12aabd", 00:20:14.731 "strip_size_kb": 0, 00:20:14.731 "state": "online", 00:20:14.731 "raid_level": "raid1", 00:20:14.731 "superblock": true, 00:20:14.731 "num_base_bdevs": 2, 00:20:14.731 "num_base_bdevs_discovered": 1, 00:20:14.731 "num_base_bdevs_operational": 1, 00:20:14.731 "base_bdevs_list": [ 00:20:14.731 { 00:20:14.731 "name": null, 00:20:14.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.731 "is_configured": false, 00:20:14.731 "data_offset": 0, 00:20:14.731 "data_size": 7936 00:20:14.731 }, 00:20:14.731 { 00:20:14.731 "name": "BaseBdev2", 00:20:14.731 "uuid": "ca0cbed2-4ee3-5537-9553-af15fc32042b", 00:20:14.731 "is_configured": true, 00:20:14.731 "data_offset": 256, 00:20:14.731 "data_size": 7936 00:20:14.731 } 00:20:14.731 ] 00:20:14.731 }' 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:14.731 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88138 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88138 ']' 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88138 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:14.990 16:34:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88138 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.990 killing process with pid 88138 00:20:14.990 Received shutdown signal, test time was about 60.000000 seconds 00:20:14.990 00:20:14.990 Latency(us) 00:20:14.990 [2024-11-05T16:34:28.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.990 [2024-11-05T16:34:28.078Z] =================================================================================================================== 00:20:14.990 [2024-11-05T16:34:28.078Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88138' 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88138 00:20:14.990 [2024-11-05 16:34:27.883441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.990 [2024-11-05 16:34:27.883587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.990 16:34:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88138 00:20:14.990 [2024-11-05 16:34:27.883639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.990 [2024-11-05 16:34:27.883651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:15.249 [2024-11-05 16:34:28.203308] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:20:16.632 16:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:16.632 00:20:16.632 real 0m19.802s 00:20:16.632 user 0m25.728s 00:20:16.632 sys 0m2.747s 00:20:16.632 ************************************ 00:20:16.632 END TEST raid_rebuild_test_sb_md_separate 00:20:16.632 ************************************ 00:20:16.632 16:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.632 16:34:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.632 16:34:29 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:16.632 16:34:29 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:16.632 16:34:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:16.632 16:34:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:16.632 16:34:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.632 ************************************ 00:20:16.632 START TEST raid_state_function_test_sb_md_interleaved 00:20:16.632 ************************************ 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:16.632 16:34:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88827 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:16.632 Process raid pid: 88827 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88827' 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88827 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88827 ']' 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.632 16:34:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.632 [2024-11-05 16:34:29.446111] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:20:16.632 [2024-11-05 16:34:29.446226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.632 [2024-11-05 16:34:29.622491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.892 [2024-11-05 16:34:29.737498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.892 [2024-11-05 16:34:29.945892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.892 [2024-11-05 16:34:29.945928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.460 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.460 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.461 [2024-11-05 16:34:30.279567] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.461 [2024-11-05 16:34:30.279621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.461 [2024-11-05 16:34:30.279631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.461 [2024-11-05 16:34:30.279641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.461 16:34:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.461 16:34:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.461 "name": "Existed_Raid", 00:20:17.461 "uuid": "7c3ce377-d183-4112-94f3-4ed3a928f74d", 00:20:17.461 "strip_size_kb": 0, 00:20:17.461 "state": "configuring", 00:20:17.461 "raid_level": "raid1", 00:20:17.461 "superblock": true, 00:20:17.461 "num_base_bdevs": 2, 00:20:17.461 "num_base_bdevs_discovered": 0, 00:20:17.461 "num_base_bdevs_operational": 2, 00:20:17.461 "base_bdevs_list": [ 00:20:17.461 { 00:20:17.461 "name": "BaseBdev1", 00:20:17.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.461 "is_configured": false, 00:20:17.461 "data_offset": 0, 00:20:17.461 "data_size": 0 00:20:17.461 }, 00:20:17.461 { 00:20:17.461 "name": "BaseBdev2", 00:20:17.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.461 "is_configured": false, 00:20:17.461 "data_offset": 0, 00:20:17.461 "data_size": 0 00:20:17.461 } 00:20:17.461 ] 00:20:17.461 }' 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.461 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 [2024-11-05 16:34:30.686807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:17.720 [2024-11-05 16:34:30.686846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 [2024-11-05 16:34:30.694770] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.720 [2024-11-05 16:34:30.694812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.720 [2024-11-05 16:34:30.694821] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.720 [2024-11-05 16:34:30.694832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 [2024-11-05 16:34:30.737426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.720 BaseBdev1 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.720 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 [ 00:20:17.720 { 00:20:17.720 "name": "BaseBdev1", 00:20:17.720 "aliases": [ 00:20:17.720 "dacd1f09-9406-4366-81ff-f377b1c0b7f9" 00:20:17.720 ], 00:20:17.720 "product_name": "Malloc disk", 00:20:17.720 "block_size": 4128, 00:20:17.720 "num_blocks": 8192, 00:20:17.720 "uuid": "dacd1f09-9406-4366-81ff-f377b1c0b7f9", 00:20:17.720 "md_size": 32, 00:20:17.720 
"md_interleave": true, 00:20:17.720 "dif_type": 0, 00:20:17.720 "assigned_rate_limits": { 00:20:17.720 "rw_ios_per_sec": 0, 00:20:17.720 "rw_mbytes_per_sec": 0, 00:20:17.720 "r_mbytes_per_sec": 0, 00:20:17.720 "w_mbytes_per_sec": 0 00:20:17.720 }, 00:20:17.720 "claimed": true, 00:20:17.720 "claim_type": "exclusive_write", 00:20:17.720 "zoned": false, 00:20:17.720 "supported_io_types": { 00:20:17.720 "read": true, 00:20:17.720 "write": true, 00:20:17.720 "unmap": true, 00:20:17.720 "flush": true, 00:20:17.720 "reset": true, 00:20:17.721 "nvme_admin": false, 00:20:17.721 "nvme_io": false, 00:20:17.721 "nvme_io_md": false, 00:20:17.721 "write_zeroes": true, 00:20:17.721 "zcopy": true, 00:20:17.721 "get_zone_info": false, 00:20:17.721 "zone_management": false, 00:20:17.721 "zone_append": false, 00:20:17.721 "compare": false, 00:20:17.721 "compare_and_write": false, 00:20:17.721 "abort": true, 00:20:17.721 "seek_hole": false, 00:20:17.721 "seek_data": false, 00:20:17.721 "copy": true, 00:20:17.721 "nvme_iov_md": false 00:20:17.721 }, 00:20:17.721 "memory_domains": [ 00:20:17.721 { 00:20:17.721 "dma_device_id": "system", 00:20:17.721 "dma_device_type": 1 00:20:17.721 }, 00:20:17.721 { 00:20:17.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.721 "dma_device_type": 2 00:20:17.721 } 00:20:17.721 ], 00:20:17.721 "driver_specific": {} 00:20:17.721 } 00:20:17.721 ] 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.721 16:34:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.721 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.980 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.980 "name": "Existed_Raid", 00:20:17.980 "uuid": "43967394-5e66-4493-be5f-abeeadb4d797", 00:20:17.980 "strip_size_kb": 0, 00:20:17.980 "state": "configuring", 00:20:17.980 "raid_level": "raid1", 
00:20:17.980 "superblock": true, 00:20:17.980 "num_base_bdevs": 2, 00:20:17.980 "num_base_bdevs_discovered": 1, 00:20:17.980 "num_base_bdevs_operational": 2, 00:20:17.980 "base_bdevs_list": [ 00:20:17.980 { 00:20:17.980 "name": "BaseBdev1", 00:20:17.980 "uuid": "dacd1f09-9406-4366-81ff-f377b1c0b7f9", 00:20:17.980 "is_configured": true, 00:20:17.980 "data_offset": 256, 00:20:17.980 "data_size": 7936 00:20:17.980 }, 00:20:17.980 { 00:20:17.980 "name": "BaseBdev2", 00:20:17.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.980 "is_configured": false, 00:20:17.980 "data_offset": 0, 00:20:17.980 "data_size": 0 00:20:17.980 } 00:20:17.980 ] 00:20:17.980 }' 00:20:17.980 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.980 16:34:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 [2024-11-05 16:34:31.208699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.239 [2024-11-05 16:34:31.208756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.239 [2024-11-05 16:34:31.216734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.239 [2024-11-05 16:34:31.218517] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:18.239 [2024-11-05 16:34:31.218570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.239 
16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.239 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.240 "name": "Existed_Raid", 00:20:18.240 "uuid": "548adb5b-4e02-4ac0-acd2-2a0d03096bdd", 00:20:18.240 "strip_size_kb": 0, 00:20:18.240 "state": "configuring", 00:20:18.240 "raid_level": "raid1", 00:20:18.240 "superblock": true, 00:20:18.240 "num_base_bdevs": 2, 00:20:18.240 "num_base_bdevs_discovered": 1, 00:20:18.240 "num_base_bdevs_operational": 2, 00:20:18.240 "base_bdevs_list": [ 00:20:18.240 { 00:20:18.240 "name": "BaseBdev1", 00:20:18.240 "uuid": "dacd1f09-9406-4366-81ff-f377b1c0b7f9", 00:20:18.240 "is_configured": true, 00:20:18.240 "data_offset": 256, 00:20:18.240 "data_size": 7936 00:20:18.240 }, 00:20:18.240 { 00:20:18.240 "name": "BaseBdev2", 00:20:18.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.240 "is_configured": false, 00:20:18.240 "data_offset": 0, 00:20:18.240 "data_size": 0 00:20:18.240 } 00:20:18.240 ] 00:20:18.240 }' 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:18.240 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 [2024-11-05 16:34:31.681953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.809 [2024-11-05 16:34:31.682196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:18.809 [2024-11-05 16:34:31.682209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:18.809 [2024-11-05 16:34:31.682296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:18.809 [2024-11-05 16:34:31.682369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:18.809 [2024-11-05 16:34:31.682395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:18.809 [2024-11-05 16:34:31.682458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.809 BaseBdev2 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.809 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.809 [ 00:20:18.809 { 00:20:18.809 "name": "BaseBdev2", 00:20:18.809 "aliases": [ 00:20:18.809 "1dc95fa4-f8c3-4ae2-a98b-7d03abffc912" 00:20:18.809 ], 00:20:18.809 "product_name": "Malloc disk", 00:20:18.809 "block_size": 4128, 00:20:18.809 "num_blocks": 8192, 00:20:18.809 "uuid": "1dc95fa4-f8c3-4ae2-a98b-7d03abffc912", 00:20:18.809 "md_size": 32, 00:20:18.809 "md_interleave": true, 00:20:18.809 "dif_type": 0, 00:20:18.809 "assigned_rate_limits": { 00:20:18.809 "rw_ios_per_sec": 0, 00:20:18.809 "rw_mbytes_per_sec": 0, 00:20:18.809 "r_mbytes_per_sec": 0, 00:20:18.809 "w_mbytes_per_sec": 0 00:20:18.810 }, 00:20:18.810 "claimed": true, 00:20:18.810 "claim_type": "exclusive_write", 
00:20:18.810 "zoned": false, 00:20:18.810 "supported_io_types": { 00:20:18.810 "read": true, 00:20:18.810 "write": true, 00:20:18.810 "unmap": true, 00:20:18.810 "flush": true, 00:20:18.810 "reset": true, 00:20:18.810 "nvme_admin": false, 00:20:18.810 "nvme_io": false, 00:20:18.810 "nvme_io_md": false, 00:20:18.810 "write_zeroes": true, 00:20:18.810 "zcopy": true, 00:20:18.810 "get_zone_info": false, 00:20:18.810 "zone_management": false, 00:20:18.810 "zone_append": false, 00:20:18.810 "compare": false, 00:20:18.810 "compare_and_write": false, 00:20:18.810 "abort": true, 00:20:18.810 "seek_hole": false, 00:20:18.810 "seek_data": false, 00:20:18.810 "copy": true, 00:20:18.810 "nvme_iov_md": false 00:20:18.810 }, 00:20:18.810 "memory_domains": [ 00:20:18.810 { 00:20:18.810 "dma_device_id": "system", 00:20:18.810 "dma_device_type": 1 00:20:18.810 }, 00:20:18.810 { 00:20:18.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.810 "dma_device_type": 2 00:20:18.810 } 00:20:18.810 ], 00:20:18.810 "driver_specific": {} 00:20:18.810 } 00:20:18.810 ] 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.810 
16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.810 "name": "Existed_Raid", 00:20:18.810 "uuid": "548adb5b-4e02-4ac0-acd2-2a0d03096bdd", 00:20:18.810 "strip_size_kb": 0, 00:20:18.810 "state": "online", 00:20:18.810 "raid_level": "raid1", 00:20:18.810 "superblock": true, 00:20:18.810 "num_base_bdevs": 2, 00:20:18.810 "num_base_bdevs_discovered": 2, 00:20:18.810 
"num_base_bdevs_operational": 2, 00:20:18.810 "base_bdevs_list": [ 00:20:18.810 { 00:20:18.810 "name": "BaseBdev1", 00:20:18.810 "uuid": "dacd1f09-9406-4366-81ff-f377b1c0b7f9", 00:20:18.810 "is_configured": true, 00:20:18.810 "data_offset": 256, 00:20:18.810 "data_size": 7936 00:20:18.810 }, 00:20:18.810 { 00:20:18.810 "name": "BaseBdev2", 00:20:18.810 "uuid": "1dc95fa4-f8c3-4ae2-a98b-7d03abffc912", 00:20:18.810 "is_configured": true, 00:20:18.810 "data_offset": 256, 00:20:18.810 "data_size": 7936 00:20:18.810 } 00:20:18.810 ] 00:20:18.810 }' 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.810 16:34:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.379 16:34:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:19.379 [2024-11-05 16:34:32.177485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:19.379 "name": "Existed_Raid", 00:20:19.379 "aliases": [ 00:20:19.379 "548adb5b-4e02-4ac0-acd2-2a0d03096bdd" 00:20:19.379 ], 00:20:19.379 "product_name": "Raid Volume", 00:20:19.379 "block_size": 4128, 00:20:19.379 "num_blocks": 7936, 00:20:19.379 "uuid": "548adb5b-4e02-4ac0-acd2-2a0d03096bdd", 00:20:19.379 "md_size": 32, 00:20:19.379 "md_interleave": true, 00:20:19.379 "dif_type": 0, 00:20:19.379 "assigned_rate_limits": { 00:20:19.379 "rw_ios_per_sec": 0, 00:20:19.379 "rw_mbytes_per_sec": 0, 00:20:19.379 "r_mbytes_per_sec": 0, 00:20:19.379 "w_mbytes_per_sec": 0 00:20:19.379 }, 00:20:19.379 "claimed": false, 00:20:19.379 "zoned": false, 00:20:19.379 "supported_io_types": { 00:20:19.379 "read": true, 00:20:19.379 "write": true, 00:20:19.379 "unmap": false, 00:20:19.379 "flush": false, 00:20:19.379 "reset": true, 00:20:19.379 "nvme_admin": false, 00:20:19.379 "nvme_io": false, 00:20:19.379 "nvme_io_md": false, 00:20:19.379 "write_zeroes": true, 00:20:19.379 "zcopy": false, 00:20:19.379 "get_zone_info": false, 00:20:19.379 "zone_management": false, 00:20:19.379 "zone_append": false, 00:20:19.379 "compare": false, 00:20:19.379 "compare_and_write": false, 00:20:19.379 "abort": false, 00:20:19.379 "seek_hole": false, 00:20:19.379 "seek_data": false, 00:20:19.379 "copy": false, 00:20:19.379 "nvme_iov_md": false 00:20:19.379 }, 00:20:19.379 "memory_domains": [ 00:20:19.379 { 00:20:19.379 "dma_device_id": "system", 00:20:19.379 "dma_device_type": 1 00:20:19.379 }, 00:20:19.379 { 00:20:19.379 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:19.379 "dma_device_type": 2 00:20:19.379 }, 00:20:19.379 { 00:20:19.379 "dma_device_id": "system", 00:20:19.379 "dma_device_type": 1 00:20:19.379 }, 00:20:19.379 { 00:20:19.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.379 "dma_device_type": 2 00:20:19.379 } 00:20:19.379 ], 00:20:19.379 "driver_specific": { 00:20:19.379 "raid": { 00:20:19.379 "uuid": "548adb5b-4e02-4ac0-acd2-2a0d03096bdd", 00:20:19.379 "strip_size_kb": 0, 00:20:19.379 "state": "online", 00:20:19.379 "raid_level": "raid1", 00:20:19.379 "superblock": true, 00:20:19.379 "num_base_bdevs": 2, 00:20:19.379 "num_base_bdevs_discovered": 2, 00:20:19.379 "num_base_bdevs_operational": 2, 00:20:19.379 "base_bdevs_list": [ 00:20:19.379 { 00:20:19.379 "name": "BaseBdev1", 00:20:19.379 "uuid": "dacd1f09-9406-4366-81ff-f377b1c0b7f9", 00:20:19.379 "is_configured": true, 00:20:19.379 "data_offset": 256, 00:20:19.379 "data_size": 7936 00:20:19.379 }, 00:20:19.379 { 00:20:19.379 "name": "BaseBdev2", 00:20:19.379 "uuid": "1dc95fa4-f8c3-4ae2-a98b-7d03abffc912", 00:20:19.379 "is_configured": true, 00:20:19.379 "data_offset": 256, 00:20:19.379 "data_size": 7936 00:20:19.379 } 00:20:19.379 ] 00:20:19.379 } 00:20:19.379 } 00:20:19.379 }' 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:19.379 BaseBdev2' 00:20:19.379 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:19.380 
16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.380 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.380 [2024-11-05 16:34:32.404826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:19.639 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.640 16:34:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.640 "name": "Existed_Raid", 00:20:19.640 "uuid": "548adb5b-4e02-4ac0-acd2-2a0d03096bdd", 00:20:19.640 "strip_size_kb": 0, 00:20:19.640 "state": "online", 00:20:19.640 "raid_level": "raid1", 00:20:19.640 "superblock": true, 00:20:19.640 "num_base_bdevs": 2, 00:20:19.640 "num_base_bdevs_discovered": 1, 00:20:19.640 "num_base_bdevs_operational": 1, 00:20:19.640 "base_bdevs_list": [ 00:20:19.640 { 00:20:19.640 "name": null, 00:20:19.640 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:19.640 "is_configured": false, 00:20:19.640 "data_offset": 0, 00:20:19.640 "data_size": 7936 00:20:19.640 }, 00:20:19.640 { 00:20:19.640 "name": "BaseBdev2", 00:20:19.640 "uuid": "1dc95fa4-f8c3-4ae2-a98b-7d03abffc912", 00:20:19.640 "is_configured": true, 00:20:19.640 "data_offset": 256, 00:20:19.640 "data_size": 7936 00:20:19.640 } 00:20:19.640 ] 00:20:19.640 }' 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.640 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.899 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:19.900 16:34:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.900 16:34:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.900 [2024-11-05 16:34:32.970825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.900 [2024-11-05 16:34:32.970944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:20.160 [2024-11-05 16:34:33.070484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.160 [2024-11-05 16:34:33.070559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.160 [2024-11-05 16:34:33.070572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88827 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88827 ']' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88827 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88827 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.160 killing process with pid 88827 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88827' 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88827 00:20:20.160 [2024-11-05 16:34:33.166170] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.160 16:34:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88827 00:20:20.160 [2024-11-05 16:34:33.182633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.596 
16:34:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:21.596 00:20:21.596 real 0m4.944s 00:20:21.596 user 0m7.094s 00:20:21.596 sys 0m0.845s 00:20:21.596 16:34:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:21.596 16:34:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.596 ************************************ 00:20:21.596 END TEST raid_state_function_test_sb_md_interleaved 00:20:21.596 ************************************ 00:20:21.596 16:34:34 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:21.596 16:34:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:21.596 16:34:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:21.596 16:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.596 ************************************ 00:20:21.596 START TEST raid_superblock_test_md_interleaved 00:20:21.596 ************************************ 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89075 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89075 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89075 ']' 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.596 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:21.597 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.597 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.597 16:34:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.597 [2024-11-05 16:34:34.451147] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:20:21.597 [2024-11-05 16:34:34.451262] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89075 ] 00:20:21.597 [2024-11-05 16:34:34.622492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.856 [2024-11-05 16:34:34.733156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.856 [2024-11-05 16:34:34.926696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.856 [2024-11-05 16:34:34.926755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:22.423 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 malloc1 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 [2024-11-05 16:34:35.369415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:22.424 [2024-11-05 16:34:35.369486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.424 [2024-11-05 16:34:35.369504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:22.424 [2024-11-05 16:34:35.369513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.424 
[2024-11-05 16:34:35.371278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.424 [2024-11-05 16:34:35.371315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:22.424 pt1 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 malloc2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 [2024-11-05 16:34:35.426998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:22.424 [2024-11-05 16:34:35.427049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.424 [2024-11-05 16:34:35.427068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:22.424 [2024-11-05 16:34:35.427076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.424 [2024-11-05 16:34:35.428818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.424 [2024-11-05 16:34:35.428852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:22.424 pt2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 [2024-11-05 16:34:35.439015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:22.424 [2024-11-05 16:34:35.440728] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:22.424 [2024-11-05 16:34:35.440905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:22.424 [2024-11-05 16:34:35.440917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:22.424 [2024-11-05 16:34:35.440987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:22.424 [2024-11-05 16:34:35.441050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:22.424 [2024-11-05 16:34:35.441067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:22.424 [2024-11-05 16:34:35.441135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.424 
16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.424 "name": "raid_bdev1", 00:20:22.424 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:22.424 "strip_size_kb": 0, 00:20:22.424 "state": "online", 00:20:22.424 "raid_level": "raid1", 00:20:22.424 "superblock": true, 00:20:22.424 "num_base_bdevs": 2, 00:20:22.424 "num_base_bdevs_discovered": 2, 00:20:22.424 "num_base_bdevs_operational": 2, 00:20:22.424 "base_bdevs_list": [ 00:20:22.424 { 00:20:22.424 "name": "pt1", 00:20:22.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:22.424 "is_configured": true, 00:20:22.424 "data_offset": 256, 00:20:22.424 "data_size": 7936 00:20:22.424 }, 00:20:22.424 { 00:20:22.424 "name": "pt2", 00:20:22.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.424 "is_configured": true, 00:20:22.424 "data_offset": 256, 00:20:22.424 "data_size": 7936 00:20:22.424 } 00:20:22.424 ] 00:20:22.424 }' 00:20:22.424 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.424 16:34:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:22.992 [2024-11-05 16:34:35.922448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:22.992 "name": "raid_bdev1", 00:20:22.992 "aliases": [ 00:20:22.992 "75739e31-b1d0-4b02-bd4c-282ff2faa25e" 00:20:22.992 ], 00:20:22.992 "product_name": "Raid Volume", 00:20:22.992 "block_size": 4128, 00:20:22.992 "num_blocks": 7936, 00:20:22.992 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:22.992 "md_size": 32, 
00:20:22.992 "md_interleave": true, 00:20:22.992 "dif_type": 0, 00:20:22.992 "assigned_rate_limits": { 00:20:22.992 "rw_ios_per_sec": 0, 00:20:22.992 "rw_mbytes_per_sec": 0, 00:20:22.992 "r_mbytes_per_sec": 0, 00:20:22.992 "w_mbytes_per_sec": 0 00:20:22.992 }, 00:20:22.992 "claimed": false, 00:20:22.992 "zoned": false, 00:20:22.992 "supported_io_types": { 00:20:22.992 "read": true, 00:20:22.992 "write": true, 00:20:22.992 "unmap": false, 00:20:22.992 "flush": false, 00:20:22.992 "reset": true, 00:20:22.992 "nvme_admin": false, 00:20:22.992 "nvme_io": false, 00:20:22.992 "nvme_io_md": false, 00:20:22.992 "write_zeroes": true, 00:20:22.992 "zcopy": false, 00:20:22.992 "get_zone_info": false, 00:20:22.992 "zone_management": false, 00:20:22.992 "zone_append": false, 00:20:22.992 "compare": false, 00:20:22.992 "compare_and_write": false, 00:20:22.992 "abort": false, 00:20:22.992 "seek_hole": false, 00:20:22.992 "seek_data": false, 00:20:22.992 "copy": false, 00:20:22.992 "nvme_iov_md": false 00:20:22.992 }, 00:20:22.992 "memory_domains": [ 00:20:22.992 { 00:20:22.992 "dma_device_id": "system", 00:20:22.992 "dma_device_type": 1 00:20:22.992 }, 00:20:22.992 { 00:20:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.992 "dma_device_type": 2 00:20:22.992 }, 00:20:22.992 { 00:20:22.992 "dma_device_id": "system", 00:20:22.992 "dma_device_type": 1 00:20:22.992 }, 00:20:22.992 { 00:20:22.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.992 "dma_device_type": 2 00:20:22.992 } 00:20:22.992 ], 00:20:22.992 "driver_specific": { 00:20:22.992 "raid": { 00:20:22.992 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:22.992 "strip_size_kb": 0, 00:20:22.992 "state": "online", 00:20:22.992 "raid_level": "raid1", 00:20:22.992 "superblock": true, 00:20:22.992 "num_base_bdevs": 2, 00:20:22.992 "num_base_bdevs_discovered": 2, 00:20:22.992 "num_base_bdevs_operational": 2, 00:20:22.992 "base_bdevs_list": [ 00:20:22.992 { 00:20:22.992 "name": "pt1", 00:20:22.992 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:22.992 "is_configured": true, 00:20:22.992 "data_offset": 256, 00:20:22.992 "data_size": 7936 00:20:22.992 }, 00:20:22.992 { 00:20:22.992 "name": "pt2", 00:20:22.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.992 "is_configured": true, 00:20:22.992 "data_offset": 256, 00:20:22.992 "data_size": 7936 00:20:22.992 } 00:20:22.992 ] 00:20:22.992 } 00:20:22.992 } 00:20:22.992 }' 00:20:22.992 16:34:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:22.992 pt2' 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.992 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:23.251 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:23.251 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 [2024-11-05 16:34:36.126058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=75739e31-b1d0-4b02-bd4c-282ff2faa25e 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 75739e31-b1d0-4b02-bd4c-282ff2faa25e ']' 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 [2024-11-05 16:34:36.169687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.252 [2024-11-05 16:34:36.169712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.252 [2024-11-05 16:34:36.169792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.252 [2024-11-05 16:34:36.169848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.252 [2024-11-05 16:34:36.169860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:23.252 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 [2024-11-05 16:34:36.309498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:23.252 [2024-11-05 16:34:36.311406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:23.252 [2024-11-05 16:34:36.311490] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:23.252 [2024-11-05 16:34:36.311572] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:23.252 [2024-11-05 16:34:36.311588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.252 [2024-11-05 16:34:36.311599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:23.252 request: 00:20:23.252 { 00:20:23.252 "name": "raid_bdev1", 00:20:23.252 "raid_level": "raid1", 00:20:23.252 "base_bdevs": [ 00:20:23.252 "malloc1", 00:20:23.252 "malloc2" 00:20:23.252 ], 00:20:23.252 "superblock": false, 00:20:23.252 "method": "bdev_raid_create", 00:20:23.252 "req_id": 1 00:20:23.252 } 00:20:23.252 Got JSON-RPC error response 00:20:23.252 response: 00:20:23.252 { 00:20:23.252 "code": -17, 00:20:23.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:23.252 } 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.252 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:23.252 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.511 [2024-11-05 16:34:36.373346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:23.511 [2024-11-05 16:34:36.373401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.511 [2024-11-05 16:34:36.373417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:23.511 [2024-11-05 16:34:36.373427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.511 [2024-11-05 16:34:36.375256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.511 [2024-11-05 16:34:36.375294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:23.511 [2024-11-05 16:34:36.375343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:23.511 [2024-11-05 16:34:36.375423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:23.511 pt1 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.511 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.511 
"name": "raid_bdev1", 00:20:23.511 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:23.511 "strip_size_kb": 0, 00:20:23.511 "state": "configuring", 00:20:23.511 "raid_level": "raid1", 00:20:23.511 "superblock": true, 00:20:23.511 "num_base_bdevs": 2, 00:20:23.511 "num_base_bdevs_discovered": 1, 00:20:23.511 "num_base_bdevs_operational": 2, 00:20:23.511 "base_bdevs_list": [ 00:20:23.511 { 00:20:23.511 "name": "pt1", 00:20:23.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.511 "is_configured": true, 00:20:23.511 "data_offset": 256, 00:20:23.511 "data_size": 7936 00:20:23.511 }, 00:20:23.511 { 00:20:23.511 "name": null, 00:20:23.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.511 "is_configured": false, 00:20:23.511 "data_offset": 256, 00:20:23.511 "data_size": 7936 00:20:23.511 } 00:20:23.511 ] 00:20:23.511 }' 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.511 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.770 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.771 [2024-11-05 16:34:36.756695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.771 [2024-11-05 16:34:36.756764] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.771 [2024-11-05 16:34:36.756786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:23.771 [2024-11-05 16:34:36.756797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.771 [2024-11-05 16:34:36.756970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.771 [2024-11-05 16:34:36.756985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.771 [2024-11-05 16:34:36.757031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:23.771 [2024-11-05 16:34:36.757056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:23.771 [2024-11-05 16:34:36.757142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:23.771 [2024-11-05 16:34:36.757168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:23.771 [2024-11-05 16:34:36.757243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:23.771 [2024-11-05 16:34:36.757322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:23.771 [2024-11-05 16:34:36.757349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:23.771 [2024-11-05 16:34:36.757412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.771 pt2 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:23.771 16:34:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.771 "name": 
"raid_bdev1", 00:20:23.771 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:23.771 "strip_size_kb": 0, 00:20:23.771 "state": "online", 00:20:23.771 "raid_level": "raid1", 00:20:23.771 "superblock": true, 00:20:23.771 "num_base_bdevs": 2, 00:20:23.771 "num_base_bdevs_discovered": 2, 00:20:23.771 "num_base_bdevs_operational": 2, 00:20:23.771 "base_bdevs_list": [ 00:20:23.771 { 00:20:23.771 "name": "pt1", 00:20:23.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:23.771 "is_configured": true, 00:20:23.771 "data_offset": 256, 00:20:23.771 "data_size": 7936 00:20:23.771 }, 00:20:23.771 { 00:20:23.771 "name": "pt2", 00:20:23.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:23.771 "is_configured": true, 00:20:23.771 "data_offset": 256, 00:20:23.771 "data_size": 7936 00:20:23.771 } 00:20:23.771 ] 00:20:23.771 }' 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.771 16:34:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.340 16:34:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 [2024-11-05 16:34:37.188232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.340 "name": "raid_bdev1", 00:20:24.340 "aliases": [ 00:20:24.340 "75739e31-b1d0-4b02-bd4c-282ff2faa25e" 00:20:24.340 ], 00:20:24.340 "product_name": "Raid Volume", 00:20:24.340 "block_size": 4128, 00:20:24.340 "num_blocks": 7936, 00:20:24.340 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:24.340 "md_size": 32, 00:20:24.340 "md_interleave": true, 00:20:24.340 "dif_type": 0, 00:20:24.340 "assigned_rate_limits": { 00:20:24.340 "rw_ios_per_sec": 0, 00:20:24.340 "rw_mbytes_per_sec": 0, 00:20:24.340 "r_mbytes_per_sec": 0, 00:20:24.340 "w_mbytes_per_sec": 0 00:20:24.340 }, 00:20:24.340 "claimed": false, 00:20:24.340 "zoned": false, 00:20:24.340 "supported_io_types": { 00:20:24.340 "read": true, 00:20:24.340 "write": true, 00:20:24.340 "unmap": false, 00:20:24.340 "flush": false, 00:20:24.340 "reset": true, 00:20:24.340 "nvme_admin": false, 00:20:24.340 "nvme_io": false, 00:20:24.340 "nvme_io_md": false, 00:20:24.340 "write_zeroes": true, 00:20:24.340 "zcopy": false, 00:20:24.340 "get_zone_info": false, 00:20:24.340 "zone_management": false, 00:20:24.340 "zone_append": false, 00:20:24.340 "compare": false, 00:20:24.340 "compare_and_write": false, 00:20:24.340 "abort": false, 00:20:24.340 "seek_hole": false, 00:20:24.340 "seek_data": false, 00:20:24.340 "copy": false, 00:20:24.340 "nvme_iov_md": 
false 00:20:24.340 }, 00:20:24.340 "memory_domains": [ 00:20:24.340 { 00:20:24.340 "dma_device_id": "system", 00:20:24.340 "dma_device_type": 1 00:20:24.340 }, 00:20:24.340 { 00:20:24.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.340 "dma_device_type": 2 00:20:24.340 }, 00:20:24.340 { 00:20:24.340 "dma_device_id": "system", 00:20:24.340 "dma_device_type": 1 00:20:24.340 }, 00:20:24.340 { 00:20:24.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.340 "dma_device_type": 2 00:20:24.340 } 00:20:24.340 ], 00:20:24.340 "driver_specific": { 00:20:24.340 "raid": { 00:20:24.340 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:24.340 "strip_size_kb": 0, 00:20:24.340 "state": "online", 00:20:24.340 "raid_level": "raid1", 00:20:24.340 "superblock": true, 00:20:24.340 "num_base_bdevs": 2, 00:20:24.340 "num_base_bdevs_discovered": 2, 00:20:24.340 "num_base_bdevs_operational": 2, 00:20:24.340 "base_bdevs_list": [ 00:20:24.340 { 00:20:24.340 "name": "pt1", 00:20:24.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:24.340 "is_configured": true, 00:20:24.340 "data_offset": 256, 00:20:24.340 "data_size": 7936 00:20:24.340 }, 00:20:24.340 { 00:20:24.340 "name": "pt2", 00:20:24.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.340 "is_configured": true, 00:20:24.340 "data_offset": 256, 00:20:24.340 "data_size": 7936 00:20:24.340 } 00:20:24.340 ] 00:20:24.340 } 00:20:24.340 } 00:20:24.340 }' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:24.340 pt2' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 [2024-11-05 16:34:37.371908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 75739e31-b1d0-4b02-bd4c-282ff2faa25e '!=' 75739e31-b1d0-4b02-bd4c-282ff2faa25e ']' 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 [2024-11-05 16:34:37.415622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.340 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.341 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.600 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.600 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:24.600 "name": "raid_bdev1", 00:20:24.600 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:24.600 "strip_size_kb": 0, 00:20:24.600 "state": "online", 00:20:24.600 "raid_level": "raid1", 00:20:24.600 "superblock": true, 00:20:24.600 "num_base_bdevs": 2, 00:20:24.600 "num_base_bdevs_discovered": 1, 00:20:24.600 "num_base_bdevs_operational": 1, 00:20:24.600 "base_bdevs_list": [ 00:20:24.600 { 00:20:24.600 "name": null, 00:20:24.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.600 "is_configured": false, 00:20:24.600 "data_offset": 0, 00:20:24.600 "data_size": 7936 00:20:24.600 }, 00:20:24.600 { 00:20:24.600 "name": "pt2", 00:20:24.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:24.600 "is_configured": true, 00:20:24.600 "data_offset": 256, 00:20:24.600 "data_size": 7936 00:20:24.600 } 00:20:24.600 ] 00:20:24.600 }' 00:20:24.600 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.600 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.859 [2024-11-05 16:34:37.878783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.859 [2024-11-05 16:34:37.878815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.859 [2024-11-05 16:34:37.878897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.859 [2024-11-05 16:34:37.878954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:24.859 [2024-11-05 16:34:37.878969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:24.859 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.860 [2024-11-05 16:34:37.938665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.860 [2024-11-05 16:34:37.938723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.860 [2024-11-05 16:34:37.938739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:24.860 [2024-11-05 16:34:37.938750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.860 [2024-11-05 16:34:37.940706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.860 [2024-11-05 16:34:37.940744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.860 [2024-11-05 16:34:37.940810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:24.860 [2024-11-05 16:34:37.940857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:24.860 [2024-11-05 16:34:37.940922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:24.860 [2024-11-05 16:34:37.940933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:24.860 [2024-11-05 16:34:37.941019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:24.860 [2024-11-05 16:34:37.941092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:24.860 [2024-11-05 16:34:37.941099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:24.860 [2024-11-05 16:34:37.941162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.860 pt2 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.860 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.119 16:34:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.119 "name": "raid_bdev1", 00:20:25.119 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:25.119 "strip_size_kb": 0, 00:20:25.119 "state": "online", 00:20:25.119 "raid_level": "raid1", 00:20:25.119 "superblock": true, 00:20:25.119 "num_base_bdevs": 2, 00:20:25.119 "num_base_bdevs_discovered": 1, 00:20:25.119 "num_base_bdevs_operational": 1, 00:20:25.119 "base_bdevs_list": [ 00:20:25.119 { 00:20:25.119 "name": null, 00:20:25.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.119 "is_configured": false, 00:20:25.119 "data_offset": 256, 00:20:25.119 "data_size": 7936 00:20:25.119 }, 00:20:25.119 { 00:20:25.119 "name": "pt2", 00:20:25.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.119 "is_configured": true, 00:20:25.119 "data_offset": 256, 00:20:25.119 "data_size": 7936 00:20:25.119 } 00:20:25.119 ] 00:20:25.119 }' 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.119 16:34:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:25.378 16:34:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.378 [2024-11-05 16:34:38.389873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.378 [2024-11-05 16:34:38.389907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.378 [2024-11-05 16:34:38.389983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.378 [2024-11-05 16:34:38.390033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.378 [2024-11-05 16:34:38.390041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.378 [2024-11-05 16:34:38.449774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:25.378 [2024-11-05 16:34:38.449851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.378 [2024-11-05 16:34:38.449873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:25.378 [2024-11-05 16:34:38.449882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.378 [2024-11-05 16:34:38.451811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.378 [2024-11-05 16:34:38.451845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:25.378 [2024-11-05 16:34:38.451899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:25.378 [2024-11-05 16:34:38.451944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:25.378 [2024-11-05 16:34:38.452042] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:25.378 [2024-11-05 16:34:38.452052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.378 [2024-11-05 16:34:38.452070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:25.378 [2024-11-05 16:34:38.452127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.378 [2024-11-05 16:34:38.452201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:25.378 [2024-11-05 16:34:38.452210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:25.378 [2024-11-05 16:34:38.452272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:25.378 [2024-11-05 16:34:38.452333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:25.378 [2024-11-05 16:34:38.452345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:25.378 [2024-11-05 16:34:38.452413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.378 pt1 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.378 16:34:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.378 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.637 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.637 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.637 "name": "raid_bdev1", 00:20:25.637 "uuid": "75739e31-b1d0-4b02-bd4c-282ff2faa25e", 00:20:25.637 "strip_size_kb": 0, 00:20:25.637 "state": "online", 00:20:25.637 "raid_level": "raid1", 00:20:25.637 "superblock": true, 00:20:25.637 "num_base_bdevs": 2, 00:20:25.637 "num_base_bdevs_discovered": 1, 00:20:25.637 "num_base_bdevs_operational": 1, 00:20:25.637 "base_bdevs_list": [ 00:20:25.637 { 00:20:25.637 "name": null, 00:20:25.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.637 "is_configured": false, 00:20:25.637 "data_offset": 256, 00:20:25.637 "data_size": 7936 00:20:25.637 }, 00:20:25.637 { 00:20:25.637 "name": "pt2", 00:20:25.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.637 "is_configured": true, 00:20:25.637 "data_offset": 256, 00:20:25.637 "data_size": 7936 00:20:25.637 } 00:20:25.637 ] 00:20:25.637 }' 00:20:25.637 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.637 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.896 [2024-11-05 16:34:38.925174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 75739e31-b1d0-4b02-bd4c-282ff2faa25e '!=' 75739e31-b1d0-4b02-bd4c-282ff2faa25e ']' 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89075 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89075 ']' 00:20:25.896 16:34:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89075 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.896 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89075 00:20:26.155 16:34:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:26.155 16:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:26.155 killing process with pid 89075 00:20:26.155 16:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89075' 00:20:26.155 16:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89075 00:20:26.155 [2024-11-05 16:34:39.003467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.155 [2024-11-05 16:34:39.003589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.155 [2024-11-05 16:34:39.003640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.155 [2024-11-05 16:34:39.003655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:26.155 16:34:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89075 00:20:26.155 [2024-11-05 16:34:39.209230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.559 16:34:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:27.559 00:20:27.559 real 0m5.918s 00:20:27.559 user 0m8.960s 00:20:27.559 sys 0m1.076s 00:20:27.559 
16:34:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:27.559 16:34:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.559 ************************************ 00:20:27.559 END TEST raid_superblock_test_md_interleaved 00:20:27.559 ************************************ 00:20:27.559 16:34:40 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:27.559 16:34:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:27.559 16:34:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:27.559 16:34:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.559 ************************************ 00:20:27.559 START TEST raid_rebuild_test_sb_md_interleaved 00:20:27.559 ************************************ 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89405 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89405 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89405 ']' 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:27.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:27.559 16:34:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.560 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:27.560 Zero copy mechanism will not be used. 00:20:27.560 [2024-11-05 16:34:40.448687] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:20:27.560 [2024-11-05 16:34:40.448816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89405 ] 00:20:27.560 [2024-11-05 16:34:40.623580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.818 [2024-11-05 16:34:40.734911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.078 [2024-11-05 16:34:40.926013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.078 [2024-11-05 16:34:40.926058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.338 BaseBdev1_malloc 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.338 16:34:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.338 [2024-11-05 16:34:41.315317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:28.338 [2024-11-05 16:34:41.315375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.338 [2024-11-05 16:34:41.315413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:28.338 [2024-11-05 16:34:41.315425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.338 [2024-11-05 16:34:41.317270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.338 [2024-11-05 16:34:41.317309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:28.338 BaseBdev1 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.338 BaseBdev2_malloc 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.338 [2024-11-05 16:34:41.369003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:28.338 [2024-11-05 16:34:41.369084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.338 [2024-11-05 16:34:41.369105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:28.338 [2024-11-05 16:34:41.369118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.338 [2024-11-05 16:34:41.370953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.338 [2024-11-05 16:34:41.370989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:28.338 BaseBdev2 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.338 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 spare_malloc 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 spare_delay 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 [2024-11-05 16:34:41.450236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:28.598 [2024-11-05 16:34:41.450311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.598 [2024-11-05 16:34:41.450343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:28.598 [2024-11-05 16:34:41.450354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.598 [2024-11-05 16:34:41.452172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.598 [2024-11-05 16:34:41.452208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:28.598 spare 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 [2024-11-05 16:34:41.462250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.598 [2024-11-05 16:34:41.464018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.598 [2024-11-05 
16:34:41.464225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:28.598 [2024-11-05 16:34:41.464243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:28.598 [2024-11-05 16:34:41.464320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:28.598 [2024-11-05 16:34:41.464391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:28.598 [2024-11-05 16:34:41.464400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:28.598 [2024-11-05 16:34:41.464471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.598 "name": "raid_bdev1", 00:20:28.598 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:28.598 "strip_size_kb": 0, 00:20:28.598 "state": "online", 00:20:28.598 "raid_level": "raid1", 00:20:28.598 "superblock": true, 00:20:28.598 "num_base_bdevs": 2, 00:20:28.598 "num_base_bdevs_discovered": 2, 00:20:28.598 "num_base_bdevs_operational": 2, 00:20:28.598 "base_bdevs_list": [ 00:20:28.598 { 00:20:28.598 "name": "BaseBdev1", 00:20:28.598 "uuid": "d541ac1d-94e3-5746-be59-6e2724ad7a26", 00:20:28.598 "is_configured": true, 00:20:28.598 "data_offset": 256, 00:20:28.598 "data_size": 7936 00:20:28.598 }, 00:20:28.598 { 00:20:28.598 "name": "BaseBdev2", 00:20:28.598 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:28.598 "is_configured": true, 00:20:28.598 "data_offset": 256, 00:20:28.598 "data_size": 7936 00:20:28.598 } 00:20:28.598 ] 00:20:28.598 }' 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.598 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.858 16:34:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.858 [2024-11-05 16:34:41.893825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:28.858 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:29.117 16:34:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.117 [2024-11-05 16:34:41.973359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.117 16:34:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.117 16:34:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.117 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.117 "name": "raid_bdev1", 00:20:29.117 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:29.117 "strip_size_kb": 0, 00:20:29.117 "state": "online", 00:20:29.117 "raid_level": "raid1", 00:20:29.117 "superblock": true, 00:20:29.117 "num_base_bdevs": 2, 00:20:29.117 "num_base_bdevs_discovered": 1, 00:20:29.117 "num_base_bdevs_operational": 1, 00:20:29.117 "base_bdevs_list": [ 00:20:29.117 { 00:20:29.117 "name": null, 00:20:29.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.117 "is_configured": false, 00:20:29.117 "data_offset": 0, 00:20:29.117 "data_size": 7936 00:20:29.117 }, 00:20:29.117 { 00:20:29.117 "name": "BaseBdev2", 00:20:29.117 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:29.117 "is_configured": true, 00:20:29.117 "data_offset": 256, 00:20:29.117 "data_size": 7936 00:20:29.117 } 00:20:29.117 ] 00:20:29.117 }' 00:20:29.117 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.117 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.377 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.377 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.377 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.377 [2024-11-05 16:34:42.424666] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.377 [2024-11-05 16:34:42.441433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:29.377 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.377 16:34:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:29.377 [2024-11-05 16:34:42.443284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.756 "name": "raid_bdev1", 00:20:30.756 
"uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:30.756 "strip_size_kb": 0, 00:20:30.756 "state": "online", 00:20:30.756 "raid_level": "raid1", 00:20:30.756 "superblock": true, 00:20:30.756 "num_base_bdevs": 2, 00:20:30.756 "num_base_bdevs_discovered": 2, 00:20:30.756 "num_base_bdevs_operational": 2, 00:20:30.756 "process": { 00:20:30.756 "type": "rebuild", 00:20:30.756 "target": "spare", 00:20:30.756 "progress": { 00:20:30.756 "blocks": 2560, 00:20:30.756 "percent": 32 00:20:30.756 } 00:20:30.756 }, 00:20:30.756 "base_bdevs_list": [ 00:20:30.756 { 00:20:30.756 "name": "spare", 00:20:30.756 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:30.756 "is_configured": true, 00:20:30.756 "data_offset": 256, 00:20:30.756 "data_size": 7936 00:20:30.756 }, 00:20:30.756 { 00:20:30.756 "name": "BaseBdev2", 00:20:30.756 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:30.756 "is_configured": true, 00:20:30.756 "data_offset": 256, 00:20:30.756 "data_size": 7936 00:20:30.756 } 00:20:30.756 ] 00:20:30.756 }' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.756 [2024-11-05 16:34:43.606461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:30.756 [2024-11-05 16:34:43.648564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:30.756 [2024-11-05 16:34:43.648635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.756 [2024-11-05 16:34:43.648649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:30.756 [2024-11-05 16:34:43.648667] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.756 "name": "raid_bdev1", 00:20:30.756 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:30.756 "strip_size_kb": 0, 00:20:30.756 "state": "online", 00:20:30.756 "raid_level": "raid1", 00:20:30.756 "superblock": true, 00:20:30.756 "num_base_bdevs": 2, 00:20:30.756 "num_base_bdevs_discovered": 1, 00:20:30.756 "num_base_bdevs_operational": 1, 00:20:30.756 "base_bdevs_list": [ 00:20:30.756 { 00:20:30.756 "name": null, 00:20:30.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.756 "is_configured": false, 00:20:30.756 "data_offset": 0, 00:20:30.756 "data_size": 7936 00:20:30.756 }, 00:20:30.756 { 00:20:30.756 "name": "BaseBdev2", 00:20:30.756 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:30.756 "is_configured": true, 00:20:30.756 "data_offset": 256, 00:20:30.756 "data_size": 7936 00:20:30.756 } 00:20:30.756 ] 00:20:30.756 }' 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.756 16:34:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.325 "name": "raid_bdev1", 00:20:31.325 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:31.325 "strip_size_kb": 0, 00:20:31.325 "state": "online", 00:20:31.325 "raid_level": "raid1", 00:20:31.325 "superblock": true, 00:20:31.325 "num_base_bdevs": 2, 00:20:31.325 "num_base_bdevs_discovered": 1, 00:20:31.325 "num_base_bdevs_operational": 1, 00:20:31.325 "base_bdevs_list": [ 00:20:31.325 { 00:20:31.325 "name": null, 00:20:31.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.325 "is_configured": false, 00:20:31.325 "data_offset": 0, 00:20:31.325 "data_size": 7936 00:20:31.325 }, 00:20:31.325 { 00:20:31.325 "name": "BaseBdev2", 00:20:31.325 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:31.325 "is_configured": true, 00:20:31.325 "data_offset": 256, 00:20:31.325 "data_size": 7936 00:20:31.325 } 00:20:31.325 ] 00:20:31.325 }' 
00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.325 [2024-11-05 16:34:44.258567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.325 [2024-11-05 16:34:44.274744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.325 16:34:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:31.325 [2024-11-05 16:34:44.276643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.263 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.263 "name": "raid_bdev1", 00:20:32.263 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:32.263 "strip_size_kb": 0, 00:20:32.263 "state": "online", 00:20:32.263 "raid_level": "raid1", 00:20:32.263 "superblock": true, 00:20:32.263 "num_base_bdevs": 2, 00:20:32.263 "num_base_bdevs_discovered": 2, 00:20:32.263 "num_base_bdevs_operational": 2, 00:20:32.263 "process": { 00:20:32.263 "type": "rebuild", 00:20:32.263 "target": "spare", 00:20:32.263 "progress": { 00:20:32.263 "blocks": 2560, 00:20:32.263 "percent": 32 00:20:32.263 } 00:20:32.263 }, 00:20:32.263 "base_bdevs_list": [ 00:20:32.263 { 00:20:32.263 "name": "spare", 00:20:32.263 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:32.263 "is_configured": true, 00:20:32.263 "data_offset": 256, 00:20:32.263 "data_size": 7936 00:20:32.263 }, 00:20:32.263 { 00:20:32.263 "name": "BaseBdev2", 00:20:32.263 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:32.263 "is_configured": true, 00:20:32.263 "data_offset": 256, 00:20:32.263 "data_size": 7936 00:20:32.263 } 00:20:32.263 ] 00:20:32.263 }' 00:20:32.263 16:34:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:32.523 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=757 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.523 16:34:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.523 "name": "raid_bdev1", 00:20:32.523 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:32.523 "strip_size_kb": 0, 00:20:32.523 "state": "online", 00:20:32.523 "raid_level": "raid1", 00:20:32.523 "superblock": true, 00:20:32.523 "num_base_bdevs": 2, 00:20:32.523 "num_base_bdevs_discovered": 2, 00:20:32.523 "num_base_bdevs_operational": 2, 00:20:32.523 "process": { 00:20:32.523 "type": "rebuild", 00:20:32.523 "target": "spare", 00:20:32.523 "progress": { 00:20:32.523 "blocks": 2816, 00:20:32.523 "percent": 35 00:20:32.523 } 00:20:32.523 }, 00:20:32.523 "base_bdevs_list": [ 00:20:32.523 { 00:20:32.523 "name": "spare", 00:20:32.523 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:32.523 "is_configured": true, 00:20:32.523 "data_offset": 256, 00:20:32.523 "data_size": 7936 00:20:32.523 }, 00:20:32.523 { 00:20:32.523 "name": "BaseBdev2", 00:20:32.523 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:32.523 "is_configured": true, 00:20:32.523 "data_offset": 256, 00:20:32.523 "data_size": 7936 00:20:32.523 } 00:20:32.523 ] 00:20:32.523 }' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.523 16:34:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:33.901 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.901 16:34:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:33.901 "name": "raid_bdev1", 00:20:33.901 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:33.901 "strip_size_kb": 0, 00:20:33.901 "state": "online", 00:20:33.901 "raid_level": "raid1", 00:20:33.901 "superblock": true, 00:20:33.901 "num_base_bdevs": 2, 00:20:33.901 "num_base_bdevs_discovered": 2, 00:20:33.901 "num_base_bdevs_operational": 2, 00:20:33.901 "process": { 00:20:33.901 "type": "rebuild", 00:20:33.901 "target": "spare", 00:20:33.901 "progress": { 00:20:33.901 "blocks": 5888, 00:20:33.901 "percent": 74 00:20:33.901 } 00:20:33.901 }, 00:20:33.901 "base_bdevs_list": [ 00:20:33.901 { 00:20:33.901 "name": "spare", 00:20:33.901 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:33.901 "is_configured": true, 00:20:33.901 "data_offset": 256, 00:20:33.901 "data_size": 7936 00:20:33.901 }, 00:20:33.901 { 00:20:33.902 "name": "BaseBdev2", 00:20:33.902 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:33.902 "is_configured": true, 00:20:33.902 "data_offset": 256, 00:20:33.902 "data_size": 7936 00:20:33.902 } 00:20:33.902 ] 00:20:33.902 }' 00:20:33.902 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:33.902 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.902 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:33.902 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.902 16:34:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:34.471 [2024-11-05 16:34:47.390079] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:34.471 [2024-11-05 16:34:47.390154] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:34.471 [2024-11-05 16:34:47.390273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.731 "name": "raid_bdev1", 00:20:34.731 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:34.731 "strip_size_kb": 0, 00:20:34.731 "state": "online", 00:20:34.731 "raid_level": "raid1", 00:20:34.731 "superblock": true, 00:20:34.731 "num_base_bdevs": 2, 00:20:34.731 
"num_base_bdevs_discovered": 2, 00:20:34.731 "num_base_bdevs_operational": 2, 00:20:34.731 "base_bdevs_list": [ 00:20:34.731 { 00:20:34.731 "name": "spare", 00:20:34.731 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:34.731 "is_configured": true, 00:20:34.731 "data_offset": 256, 00:20:34.731 "data_size": 7936 00:20:34.731 }, 00:20:34.731 { 00:20:34.731 "name": "BaseBdev2", 00:20:34.731 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:34.731 "is_configured": true, 00:20:34.731 "data_offset": 256, 00:20:34.731 "data_size": 7936 00:20:34.731 } 00:20:34.731 ] 00:20:34.731 }' 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:34.731 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.991 
16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.991 "name": "raid_bdev1", 00:20:34.991 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:34.991 "strip_size_kb": 0, 00:20:34.991 "state": "online", 00:20:34.991 "raid_level": "raid1", 00:20:34.991 "superblock": true, 00:20:34.991 "num_base_bdevs": 2, 00:20:34.991 "num_base_bdevs_discovered": 2, 00:20:34.991 "num_base_bdevs_operational": 2, 00:20:34.991 "base_bdevs_list": [ 00:20:34.991 { 00:20:34.991 "name": "spare", 00:20:34.991 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:34.991 "is_configured": true, 00:20:34.991 "data_offset": 256, 00:20:34.991 "data_size": 7936 00:20:34.991 }, 00:20:34.991 { 00:20:34.991 "name": "BaseBdev2", 00:20:34.991 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:34.991 "is_configured": true, 00:20:34.991 "data_offset": 256, 00:20:34.991 "data_size": 7936 00:20:34.991 } 00:20:34.991 ] 00:20:34.991 }' 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.991 16:34:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.991 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.991 "name": 
"raid_bdev1", 00:20:34.991 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:34.991 "strip_size_kb": 0, 00:20:34.991 "state": "online", 00:20:34.991 "raid_level": "raid1", 00:20:34.991 "superblock": true, 00:20:34.991 "num_base_bdevs": 2, 00:20:34.991 "num_base_bdevs_discovered": 2, 00:20:34.991 "num_base_bdevs_operational": 2, 00:20:34.991 "base_bdevs_list": [ 00:20:34.991 { 00:20:34.991 "name": "spare", 00:20:34.991 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:34.991 "is_configured": true, 00:20:34.991 "data_offset": 256, 00:20:34.991 "data_size": 7936 00:20:34.991 }, 00:20:34.991 { 00:20:34.991 "name": "BaseBdev2", 00:20:34.991 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:34.991 "is_configured": true, 00:20:34.991 "data_offset": 256, 00:20:34.991 "data_size": 7936 00:20:34.991 } 00:20:34.991 ] 00:20:34.991 }' 00:20:34.992 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.992 16:34:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 [2024-11-05 16:34:48.344090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:35.560 [2024-11-05 16:34:48.344128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:35.560 [2024-11-05 16:34:48.344232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:35.560 [2024-11-05 16:34:48.344305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:35.560 [2024-11-05 
16:34:48.344317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 [2024-11-05 16:34:48.411941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:35.560 [2024-11-05 16:34:48.412003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.560 [2024-11-05 16:34:48.412047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:35.560 [2024-11-05 16:34:48.412057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.560 [2024-11-05 16:34:48.413967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.560 [2024-11-05 16:34:48.414002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:35.560 [2024-11-05 16:34:48.414059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:35.560 [2024-11-05 16:34:48.414127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.560 [2024-11-05 16:34:48.414237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.560 spare 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 [2024-11-05 16:34:48.514147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:35.560 [2024-11-05 16:34:48.514184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:35.560 [2024-11-05 16:34:48.514320] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:35.560 [2024-11-05 16:34:48.514424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:35.560 [2024-11-05 16:34:48.514441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:35.560 [2024-11-05 16:34:48.514570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.560 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.560 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.560 "name": "raid_bdev1", 00:20:35.560 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:35.561 "strip_size_kb": 0, 00:20:35.561 "state": "online", 00:20:35.561 "raid_level": "raid1", 00:20:35.561 "superblock": true, 00:20:35.561 "num_base_bdevs": 2, 00:20:35.561 "num_base_bdevs_discovered": 2, 00:20:35.561 "num_base_bdevs_operational": 2, 00:20:35.561 "base_bdevs_list": [ 00:20:35.561 { 00:20:35.561 "name": "spare", 00:20:35.561 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:35.561 "is_configured": true, 00:20:35.561 "data_offset": 256, 00:20:35.561 "data_size": 7936 00:20:35.561 }, 00:20:35.561 { 00:20:35.561 "name": "BaseBdev2", 00:20:35.561 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:35.561 "is_configured": true, 00:20:35.561 "data_offset": 256, 00:20:35.561 "data_size": 7936 00:20:35.561 } 00:20:35.561 ] 00:20:35.561 }' 00:20:35.561 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.561 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.129 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.129 16:34:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.129 "name": "raid_bdev1", 00:20:36.129 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:36.129 "strip_size_kb": 0, 00:20:36.129 "state": "online", 00:20:36.129 "raid_level": "raid1", 00:20:36.129 "superblock": true, 00:20:36.129 "num_base_bdevs": 2, 00:20:36.129 "num_base_bdevs_discovered": 2, 00:20:36.129 "num_base_bdevs_operational": 2, 00:20:36.129 "base_bdevs_list": [ 00:20:36.129 { 00:20:36.129 "name": "spare", 00:20:36.130 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:36.130 "is_configured": true, 00:20:36.130 "data_offset": 256, 00:20:36.130 "data_size": 7936 00:20:36.130 }, 00:20:36.130 { 00:20:36.130 "name": "BaseBdev2", 00:20:36.130 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:36.130 "is_configured": true, 00:20:36.130 "data_offset": 256, 00:20:36.130 "data_size": 7936 00:20:36.130 } 00:20:36.130 ] 00:20:36.130 }' 00:20:36.130 16:34:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 [2024-11-05 16:34:49.086828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:36.130 16:34:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.130 "name": "raid_bdev1", 00:20:36.130 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:36.130 "strip_size_kb": 0, 00:20:36.130 "state": "online", 00:20:36.130 
"raid_level": "raid1", 00:20:36.130 "superblock": true, 00:20:36.130 "num_base_bdevs": 2, 00:20:36.130 "num_base_bdevs_discovered": 1, 00:20:36.130 "num_base_bdevs_operational": 1, 00:20:36.130 "base_bdevs_list": [ 00:20:36.130 { 00:20:36.130 "name": null, 00:20:36.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.130 "is_configured": false, 00:20:36.130 "data_offset": 0, 00:20:36.130 "data_size": 7936 00:20:36.130 }, 00:20:36.130 { 00:20:36.130 "name": "BaseBdev2", 00:20:36.130 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:36.130 "is_configured": true, 00:20:36.130 "data_offset": 256, 00:20:36.130 "data_size": 7936 00:20:36.130 } 00:20:36.130 ] 00:20:36.130 }' 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.130 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.699 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.699 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.699 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:36.699 [2024-11-05 16:34:49.506204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.699 [2024-11-05 16:34:49.506416] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:36.699 [2024-11-05 16:34:49.506442] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:36.699 [2024-11-05 16:34:49.506484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.699 [2024-11-05 16:34:49.522582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:36.699 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.699 16:34:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:36.699 [2024-11-05 16:34:49.524444] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.676 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:37.676 "name": "raid_bdev1", 00:20:37.676 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:37.676 "strip_size_kb": 0, 00:20:37.676 "state": "online", 00:20:37.676 "raid_level": "raid1", 00:20:37.677 "superblock": true, 00:20:37.677 "num_base_bdevs": 2, 00:20:37.677 "num_base_bdevs_discovered": 2, 00:20:37.677 "num_base_bdevs_operational": 2, 00:20:37.677 "process": { 00:20:37.677 "type": "rebuild", 00:20:37.677 "target": "spare", 00:20:37.677 "progress": { 00:20:37.677 "blocks": 2560, 00:20:37.677 "percent": 32 00:20:37.677 } 00:20:37.677 }, 00:20:37.677 "base_bdevs_list": [ 00:20:37.677 { 00:20:37.677 "name": "spare", 00:20:37.677 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:37.677 "is_configured": true, 00:20:37.677 "data_offset": 256, 00:20:37.677 "data_size": 7936 00:20:37.677 }, 00:20:37.677 { 00:20:37.677 "name": "BaseBdev2", 00:20:37.677 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:37.677 "is_configured": true, 00:20:37.677 "data_offset": 256, 00:20:37.677 "data_size": 7936 00:20:37.677 } 00:20:37.677 ] 00:20:37.677 }' 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.677 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.677 [2024-11-05 16:34:50.644062] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.677 [2024-11-05 16:34:50.729617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.677 [2024-11-05 16:34:50.729700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.677 [2024-11-05 16:34:50.729714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.677 [2024-11-05 16:34:50.729723] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.962 16:34:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.962 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.963 "name": "raid_bdev1", 00:20:37.963 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:37.963 "strip_size_kb": 0, 00:20:37.963 "state": "online", 00:20:37.963 "raid_level": "raid1", 00:20:37.963 "superblock": true, 00:20:37.963 "num_base_bdevs": 2, 00:20:37.963 "num_base_bdevs_discovered": 1, 00:20:37.963 "num_base_bdevs_operational": 1, 00:20:37.963 "base_bdevs_list": [ 00:20:37.963 { 00:20:37.963 "name": null, 00:20:37.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.963 "is_configured": false, 00:20:37.963 "data_offset": 0, 00:20:37.963 "data_size": 7936 00:20:37.963 }, 00:20:37.963 { 00:20:37.963 "name": "BaseBdev2", 00:20:37.963 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:37.963 "is_configured": true, 00:20:37.963 "data_offset": 256, 00:20:37.963 "data_size": 7936 00:20:37.963 } 00:20:37.963 ] 00:20:37.963 }' 00:20:37.963 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.963 16:34:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.222 16:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:38.222 16:34:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.222 16:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:38.222 [2024-11-05 16:34:51.185606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:38.222 [2024-11-05 16:34:51.185673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.222 [2024-11-05 16:34:51.185720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:38.222 [2024-11-05 16:34:51.185735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.222 [2024-11-05 16:34:51.185937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.222 [2024-11-05 16:34:51.185959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:38.223 [2024-11-05 16:34:51.186017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:38.223 [2024-11-05 16:34:51.186035] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:38.223 [2024-11-05 16:34:51.186045] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:38.223 [2024-11-05 16:34:51.186073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.223 [2024-11-05 16:34:51.201628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:38.223 spare 00:20:38.223 16:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.223 16:34:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:38.223 [2024-11-05 16:34:51.203491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.160 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:39.420 "name": "raid_bdev1", 00:20:39.420 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:39.420 "strip_size_kb": 0, 00:20:39.420 "state": "online", 00:20:39.420 "raid_level": "raid1", 00:20:39.420 "superblock": true, 00:20:39.420 "num_base_bdevs": 2, 00:20:39.420 "num_base_bdevs_discovered": 2, 00:20:39.420 "num_base_bdevs_operational": 2, 00:20:39.420 "process": { 00:20:39.420 "type": "rebuild", 00:20:39.420 "target": "spare", 00:20:39.420 "progress": { 00:20:39.420 "blocks": 2560, 00:20:39.420 "percent": 32 00:20:39.420 } 00:20:39.420 }, 00:20:39.420 "base_bdevs_list": [ 00:20:39.420 { 00:20:39.420 "name": "spare", 00:20:39.420 "uuid": "9d8e45c5-0e43-5a4f-b3be-77ae81d6d23b", 00:20:39.420 "is_configured": true, 00:20:39.420 "data_offset": 256, 00:20:39.420 "data_size": 7936 00:20:39.420 }, 00:20:39.420 { 00:20:39.420 "name": "BaseBdev2", 00:20:39.420 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:39.420 "is_configured": true, 00:20:39.420 "data_offset": 256, 00:20:39.420 "data_size": 7936 00:20:39.420 } 00:20:39.420 ] 00:20:39.420 }' 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.420 [2024-11-05 
16:34:52.359091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.420 [2024-11-05 16:34:52.408706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:39.420 [2024-11-05 16:34:52.408780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.420 [2024-11-05 16:34:52.408797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:39.420 [2024-11-05 16:34:52.408804] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.420 16:34:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.420 "name": "raid_bdev1", 00:20:39.420 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:39.420 "strip_size_kb": 0, 00:20:39.420 "state": "online", 00:20:39.420 "raid_level": "raid1", 00:20:39.420 "superblock": true, 00:20:39.420 "num_base_bdevs": 2, 00:20:39.420 "num_base_bdevs_discovered": 1, 00:20:39.420 "num_base_bdevs_operational": 1, 00:20:39.420 "base_bdevs_list": [ 00:20:39.420 { 00:20:39.420 "name": null, 00:20:39.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.420 "is_configured": false, 00:20:39.420 "data_offset": 0, 00:20:39.420 "data_size": 7936 00:20:39.420 }, 00:20:39.420 { 00:20:39.420 "name": "BaseBdev2", 00:20:39.420 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:39.420 "is_configured": true, 00:20:39.420 "data_offset": 256, 00:20:39.420 "data_size": 7936 00:20:39.420 } 00:20:39.420 ] 00:20:39.420 }' 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.420 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.989 16:34:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.989 "name": "raid_bdev1", 00:20:39.989 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:39.989 "strip_size_kb": 0, 00:20:39.989 "state": "online", 00:20:39.989 "raid_level": "raid1", 00:20:39.989 "superblock": true, 00:20:39.989 "num_base_bdevs": 2, 00:20:39.989 "num_base_bdevs_discovered": 1, 00:20:39.989 "num_base_bdevs_operational": 1, 00:20:39.989 "base_bdevs_list": [ 00:20:39.989 { 00:20:39.989 "name": null, 00:20:39.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.989 "is_configured": false, 00:20:39.989 "data_offset": 0, 00:20:39.989 "data_size": 7936 00:20:39.989 }, 00:20:39.989 { 00:20:39.989 "name": "BaseBdev2", 00:20:39.989 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:39.989 "is_configured": true, 00:20:39.989 "data_offset": 256, 
00:20:39.989 "data_size": 7936 00:20:39.989 } 00:20:39.989 ] 00:20:39.989 }' 00:20:39.989 16:34:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:40.249 [2024-11-05 16:34:53.079610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:40.249 [2024-11-05 16:34:53.079682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.249 [2024-11-05 16:34:53.079712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:40.249 [2024-11-05 16:34:53.079723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.249 [2024-11-05 16:34:53.079920] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.249 [2024-11-05 16:34:53.079940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:40.249 [2024-11-05 16:34:53.080028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:40.249 [2024-11-05 16:34:53.080044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:40.249 [2024-11-05 16:34:53.080054] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:40.249 [2024-11-05 16:34:53.080067] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:40.249 BaseBdev1 00:20:40.249 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.249 16:34:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.188 16:34:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.188 "name": "raid_bdev1", 00:20:41.188 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:41.188 "strip_size_kb": 0, 00:20:41.188 "state": "online", 00:20:41.188 "raid_level": "raid1", 00:20:41.188 "superblock": true, 00:20:41.188 "num_base_bdevs": 2, 00:20:41.188 "num_base_bdevs_discovered": 1, 00:20:41.188 "num_base_bdevs_operational": 1, 00:20:41.188 "base_bdevs_list": [ 00:20:41.188 { 00:20:41.188 "name": null, 00:20:41.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.188 "is_configured": false, 00:20:41.188 "data_offset": 0, 00:20:41.188 "data_size": 7936 00:20:41.188 }, 00:20:41.188 { 00:20:41.188 "name": "BaseBdev2", 00:20:41.188 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:41.188 "is_configured": true, 00:20:41.188 "data_offset": 256, 00:20:41.188 "data_size": 7936 00:20:41.188 } 00:20:41.188 ] 00:20:41.188 }' 00:20:41.188 16:34:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.188 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.757 "name": "raid_bdev1", 00:20:41.757 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:41.757 "strip_size_kb": 0, 00:20:41.757 "state": "online", 00:20:41.757 "raid_level": "raid1", 00:20:41.757 "superblock": true, 00:20:41.757 "num_base_bdevs": 2, 00:20:41.757 "num_base_bdevs_discovered": 1, 00:20:41.757 "num_base_bdevs_operational": 1, 00:20:41.757 "base_bdevs_list": [ 00:20:41.757 { 00:20:41.757 "name": 
null, 00:20:41.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.757 "is_configured": false, 00:20:41.757 "data_offset": 0, 00:20:41.757 "data_size": 7936 00:20:41.757 }, 00:20:41.757 { 00:20:41.757 "name": "BaseBdev2", 00:20:41.757 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:41.757 "is_configured": true, 00:20:41.757 "data_offset": 256, 00:20:41.757 "data_size": 7936 00:20:41.757 } 00:20:41.757 ] 00:20:41.757 }' 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.757 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:41.758 [2024-11-05 16:34:54.728798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.758 [2024-11-05 16:34:54.728965] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:41.758 [2024-11-05 16:34:54.728988] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:41.758 request: 00:20:41.758 { 00:20:41.758 "base_bdev": "BaseBdev1", 00:20:41.758 "raid_bdev": "raid_bdev1", 00:20:41.758 "method": "bdev_raid_add_base_bdev", 00:20:41.758 "req_id": 1 00:20:41.758 } 00:20:41.758 Got JSON-RPC error response 00:20:41.758 response: 00:20:41.758 { 00:20:41.758 "code": -22, 00:20:41.758 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:41.758 } 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.758 16:34:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.698 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.957 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.957 "name": "raid_bdev1", 00:20:42.957 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:42.957 "strip_size_kb": 0, 
00:20:42.957 "state": "online", 00:20:42.957 "raid_level": "raid1", 00:20:42.957 "superblock": true, 00:20:42.957 "num_base_bdevs": 2, 00:20:42.957 "num_base_bdevs_discovered": 1, 00:20:42.957 "num_base_bdevs_operational": 1, 00:20:42.957 "base_bdevs_list": [ 00:20:42.957 { 00:20:42.957 "name": null, 00:20:42.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.957 "is_configured": false, 00:20:42.957 "data_offset": 0, 00:20:42.957 "data_size": 7936 00:20:42.957 }, 00:20:42.957 { 00:20:42.957 "name": "BaseBdev2", 00:20:42.957 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:42.957 "is_configured": true, 00:20:42.957 "data_offset": 256, 00:20:42.957 "data_size": 7936 00:20:42.957 } 00:20:42.957 ] 00:20:42.957 }' 00:20:42.957 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.957 16:34:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.218 
16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.218 "name": "raid_bdev1", 00:20:43.218 "uuid": "fac1f759-cf82-4ef6-afe8-56337aca8b45", 00:20:43.218 "strip_size_kb": 0, 00:20:43.218 "state": "online", 00:20:43.218 "raid_level": "raid1", 00:20:43.218 "superblock": true, 00:20:43.218 "num_base_bdevs": 2, 00:20:43.218 "num_base_bdevs_discovered": 1, 00:20:43.218 "num_base_bdevs_operational": 1, 00:20:43.218 "base_bdevs_list": [ 00:20:43.218 { 00:20:43.218 "name": null, 00:20:43.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.218 "is_configured": false, 00:20:43.218 "data_offset": 0, 00:20:43.218 "data_size": 7936 00:20:43.218 }, 00:20:43.218 { 00:20:43.218 "name": "BaseBdev2", 00:20:43.218 "uuid": "94cb7d49-5aef-5781-aa92-5c71350c0170", 00:20:43.218 "is_configured": true, 00:20:43.218 "data_offset": 256, 00:20:43.218 "data_size": 7936 00:20:43.218 } 00:20:43.218 ] 00:20:43.218 }' 00:20:43.218 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89405 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89405 ']' 00:20:43.478 16:34:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89405 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89405 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:43.478 killing process with pid 89405 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89405' 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89405 00:20:43.478 Received shutdown signal, test time was about 60.000000 seconds 00:20:43.478 00:20:43.478 Latency(us) 00:20:43.478 [2024-11-05T16:34:56.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.478 [2024-11-05T16:34:56.566Z] =================================================================================================================== 00:20:43.478 [2024-11-05T16:34:56.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.478 [2024-11-05 16:34:56.424047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.478 [2024-11-05 16:34:56.424189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.478 16:34:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89405 00:20:43.478 [2024-11-05 16:34:56.424243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:43.478 [2024-11-05 16:34:56.424256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:43.738 [2024-11-05 16:34:56.720475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.118 16:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:45.118 00:20:45.118 real 0m17.445s 00:20:45.118 user 0m22.793s 00:20:45.118 sys 0m1.698s 00:20:45.118 16:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:45.118 16:34:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.118 ************************************ 00:20:45.118 END TEST raid_rebuild_test_sb_md_interleaved 00:20:45.118 ************************************ 00:20:45.118 16:34:57 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:45.118 16:34:57 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:45.118 16:34:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89405 ']' 00:20:45.118 16:34:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89405 00:20:45.118 16:34:57 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:45.118 ************************************ 00:20:45.118 END TEST bdev_raid 00:20:45.118 ************************************ 00:20:45.118 00:20:45.118 real 12m19.880s 00:20:45.118 user 16m40.774s 00:20:45.118 sys 1m53.381s 00:20:45.118 16:34:57 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:45.118 16:34:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.118 16:34:57 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:45.118 16:34:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:45.118 16:34:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:45.118 16:34:57 -- common/autotest_common.sh@10 -- # set +x 00:20:45.118 
************************************ 00:20:45.118 START TEST spdkcli_raid 00:20:45.118 ************************************ 00:20:45.118 16:34:57 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:45.118 * Looking for test storage... 00:20:45.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.118 16:34:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:45.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.118 --rc genhtml_branch_coverage=1 00:20:45.118 --rc genhtml_function_coverage=1 00:20:45.118 --rc genhtml_legend=1 00:20:45.118 --rc geninfo_all_blocks=1 00:20:45.118 --rc geninfo_unexecuted_blocks=1 00:20:45.118 00:20:45.118 ' 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:45.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.118 --rc genhtml_branch_coverage=1 00:20:45.118 --rc genhtml_function_coverage=1 00:20:45.118 --rc genhtml_legend=1 00:20:45.118 --rc geninfo_all_blocks=1 00:20:45.118 --rc geninfo_unexecuted_blocks=1 00:20:45.118 00:20:45.118 ' 00:20:45.118 
16:34:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:45.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.118 --rc genhtml_branch_coverage=1 00:20:45.118 --rc genhtml_function_coverage=1 00:20:45.118 --rc genhtml_legend=1 00:20:45.118 --rc geninfo_all_blocks=1 00:20:45.118 --rc geninfo_unexecuted_blocks=1 00:20:45.118 00:20:45.118 ' 00:20:45.118 16:34:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:45.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.118 --rc genhtml_branch_coverage=1 00:20:45.118 --rc genhtml_function_coverage=1 00:20:45.118 --rc genhtml_legend=1 00:20:45.118 --rc geninfo_all_blocks=1 00:20:45.118 --rc geninfo_unexecuted_blocks=1 00:20:45.118 00:20:45.119 ' 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:45.119 16:34:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90077 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:45.119 16:34:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90077 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90077 ']' 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.119 16:34:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.378 [2024-11-05 16:34:58.306168] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:20:45.378 [2024-11-05 16:34:58.306367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90077 ] 00:20:45.636 [2024-11-05 16:34:58.479795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:45.636 [2024-11-05 16:34:58.593543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.636 [2024-11-05 16:34:58.593621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:20:46.577 16:34:59 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.577 16:34:59 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.577 16:34:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.577 16:34:59 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:46.577 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:46.577 ' 00:20:47.956 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:47.956 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:48.215 16:35:01 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:48.215 16:35:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.215 16:35:01 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.215 16:35:01 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:48.215 16:35:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.215 16:35:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.215 16:35:01 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:48.215 ' 00:20:49.154 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:49.413 16:35:02 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:49.413 16:35:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.413 16:35:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.413 16:35:02 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:49.413 16:35:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.413 16:35:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.413 16:35:02 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:49.413 16:35:02 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:49.982 16:35:02 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:49.982 16:35:02 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:49.982 16:35:02 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:49.982 16:35:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.982 16:35:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.982 16:35:02 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:49.982 16:35:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.982 16:35:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.982 16:35:02 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:49.982 ' 00:20:50.920 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:51.179 16:35:04 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:51.179 16:35:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:51.179 16:35:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.179 16:35:04 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:51.179 16:35:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.179 16:35:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.179 16:35:04 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:51.179 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:51.179 ' 00:20:52.561 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:52.561 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:52.561 16:35:05 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 16:35:05 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90077 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90077 ']' 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90077 00:20:52.561 16:35:05 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90077 00:20:52.561 killing process with pid 90077 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90077' 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90077 00:20:52.561 16:35:05 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90077 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90077 ']' 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90077 00:20:55.211 16:35:07 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90077 ']' 00:20:55.211 16:35:07 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90077 00:20:55.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90077) - No such process 00:20:55.211 Process with pid 90077 is not found 00:20:55.211 16:35:07 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90077 is not found' 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:55.211 16:35:07 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:55.211 00:20:55.211 real 0m10.058s 00:20:55.211 user 0m20.685s 00:20:55.211 sys 
0m1.130s 00:20:55.211 16:35:08 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.211 16:35:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 ************************************ 00:20:55.211 END TEST spdkcli_raid 00:20:55.211 ************************************ 00:20:55.211 16:35:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:55.211 16:35:08 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.211 16:35:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.211 16:35:08 -- common/autotest_common.sh@10 -- # set +x 00:20:55.211 ************************************ 00:20:55.211 START TEST blockdev_raid5f 00:20:55.211 ************************************ 00:20:55.211 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:55.211 * Looking for test storage... 00:20:55.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:55.211 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:55.211 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:55.211 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:55.211 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.211 16:35:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.212 16:35:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.212 16:35:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:55.212 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.212 --rc genhtml_branch_coverage=1 00:20:55.212 --rc genhtml_function_coverage=1 00:20:55.212 --rc genhtml_legend=1 00:20:55.212 --rc geninfo_all_blocks=1 00:20:55.212 --rc geninfo_unexecuted_blocks=1 00:20:55.212 00:20:55.212 ' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:55.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.212 --rc genhtml_branch_coverage=1 00:20:55.212 --rc genhtml_function_coverage=1 00:20:55.212 --rc genhtml_legend=1 00:20:55.212 --rc geninfo_all_blocks=1 00:20:55.212 --rc geninfo_unexecuted_blocks=1 00:20:55.212 00:20:55.212 ' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:55.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.212 --rc genhtml_branch_coverage=1 00:20:55.212 --rc genhtml_function_coverage=1 00:20:55.212 --rc genhtml_legend=1 00:20:55.212 --rc geninfo_all_blocks=1 00:20:55.212 --rc geninfo_unexecuted_blocks=1 00:20:55.212 00:20:55.212 ' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:55.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.212 --rc genhtml_branch_coverage=1 00:20:55.212 --rc genhtml_function_coverage=1 00:20:55.212 --rc genhtml_legend=1 00:20:55.212 --rc geninfo_all_blocks=1 00:20:55.212 --rc geninfo_unexecuted_blocks=1 00:20:55.212 00:20:55.212 ' 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90357 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90357 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90357 ']' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.212 16:35:08 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:55.212 16:35:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:55.472 [2024-11-05 16:35:08.385429] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:20:55.472 [2024-11-05 16:35:08.385701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90357 ] 00:20:55.732 [2024-11-05 16:35:08.564002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.732 [2024-11-05 16:35:08.676884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:56.674 16:35:09 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 Malloc0 00:20:56.674 Malloc1 00:20:56.674 Malloc2 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.674 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:56.674 16:35:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dc5cfdf9-349a-45be-aa7b-53c220199114"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dc5cfdf9-349a-45be-aa7b-53c220199114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dc5cfdf9-349a-45be-aa7b-53c220199114",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6b2b577c-b32f-439a-84ab-4ab777138ab8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a96b2bad-f7c2-44a9-8a96-2ce252890e1a",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e4aa7e37-8fd6-43eb-8294-b7f786be8827",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:56.934 16:35:09 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90357 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90357 ']' 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90357 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90357 00:20:56.934 killing process with pid 90357 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90357' 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90357 00:20:56.934 16:35:09 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90357 00:20:59.470 16:35:12 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:59.470 16:35:12 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:59.470 16:35:12 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:59.470 16:35:12 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:59.470 16:35:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:59.470 ************************************ 00:20:59.470 START TEST bdev_hello_world 00:20:59.470 ************************************ 00:20:59.470 16:35:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:59.729 [2024-11-05 16:35:12.568758] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:20:59.729 [2024-11-05 16:35:12.568876] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90419 ] 00:20:59.729 [2024-11-05 16:35:12.740873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.988 [2024-11-05 16:35:12.854649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.556 [2024-11-05 16:35:13.364110] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:00.556 [2024-11-05 16:35:13.364154] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:00.556 [2024-11-05 16:35:13.364169] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:00.556 [2024-11-05 16:35:13.364644] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:00.556 [2024-11-05 16:35:13.364769] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:00.556 [2024-11-05 16:35:13.364785] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:00.556 [2024-11-05 16:35:13.364830] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:21:00.556 00:21:00.556 [2024-11-05 16:35:13.364850] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:01.933 00:21:01.933 real 0m2.240s 00:21:01.933 user 0m1.880s 00:21:01.933 sys 0m0.231s 00:21:01.933 16:35:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.933 16:35:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:01.933 ************************************ 00:21:01.933 END TEST bdev_hello_world 00:21:01.933 ************************************ 00:21:01.933 16:35:14 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:01.933 16:35:14 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:01.933 16:35:14 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:01.933 16:35:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:01.933 ************************************ 00:21:01.933 START TEST bdev_bounds 00:21:01.933 ************************************ 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:21:01.933 Process bdevio pid: 90466 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90466 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90466' 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90466 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90466 ']' 00:21:01.933 16:35:14 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.933 16:35:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:01.933 [2024-11-05 16:35:14.874120] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:21:01.933 [2024-11-05 16:35:14.874331] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90466 ] 00:21:02.192 [2024-11-05 16:35:15.031718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:02.192 [2024-11-05 16:35:15.144951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.192 [2024-11-05 16:35:15.145112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.192 [2024-11-05 16:35:15.145150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.760 16:35:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:02.760 16:35:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:21:02.760 16:35:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:02.760 I/O targets: 00:21:02.760 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:02.760 00:21:02.760 
00:21:02.760 CUnit - A unit testing framework for C - Version 2.1-3 00:21:02.760 http://cunit.sourceforge.net/ 00:21:02.760 00:21:02.760 00:21:02.760 Suite: bdevio tests on: raid5f 00:21:02.760 Test: blockdev write read block ...passed 00:21:02.760 Test: blockdev write zeroes read block ...passed 00:21:02.760 Test: blockdev write zeroes read no split ...passed 00:21:03.020 Test: blockdev write zeroes read split ...passed 00:21:03.020 Test: blockdev write zeroes read split partial ...passed 00:21:03.020 Test: blockdev reset ...passed 00:21:03.020 Test: blockdev write read 8 blocks ...passed 00:21:03.020 Test: blockdev write read size > 128k ...passed 00:21:03.020 Test: blockdev write read invalid size ...passed 00:21:03.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:03.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:03.020 Test: blockdev write read max offset ...passed 00:21:03.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:03.020 Test: blockdev writev readv 8 blocks ...passed 00:21:03.020 Test: blockdev writev readv 30 x 1block ...passed 00:21:03.020 Test: blockdev writev readv block ...passed 00:21:03.020 Test: blockdev writev readv size > 128k ...passed 00:21:03.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:03.020 Test: blockdev comparev and writev ...passed 00:21:03.020 Test: blockdev nvme passthru rw ...passed 00:21:03.020 Test: blockdev nvme passthru vendor specific ...passed 00:21:03.020 Test: blockdev nvme admin passthru ...passed 00:21:03.020 Test: blockdev copy ...passed 00:21:03.020 00:21:03.020 Run Summary: Type Total Ran Passed Failed Inactive 00:21:03.020 suites 1 1 n/a 0 0 00:21:03.020 tests 23 23 23 0 0 00:21:03.020 asserts 130 130 130 0 n/a 00:21:03.020 00:21:03.020 Elapsed time = 0.563 seconds 00:21:03.020 0 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90466 00:21:03.020 
16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90466 ']' 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90466 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90466 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90466' 00:21:03.020 killing process with pid 90466 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90466 00:21:03.020 16:35:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90466 00:21:04.926 16:35:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:04.926 00:21:04.926 real 0m2.707s 00:21:04.926 user 0m6.765s 00:21:04.926 sys 0m0.364s 00:21:04.926 16:35:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.926 16:35:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:04.926 ************************************ 00:21:04.926 END TEST bdev_bounds 00:21:04.926 ************************************ 00:21:04.926 16:35:17 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:04.926 16:35:17 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:04.926 16:35:17 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:04.926 
16:35:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:04.926 ************************************ 00:21:04.926 START TEST bdev_nbd 00:21:04.926 ************************************ 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90531 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90531 /var/tmp/spdk-nbd.sock 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90531 ']' 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:04.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:04.926 16:35:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:04.926 [2024-11-05 16:35:17.660128] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:21:04.926 [2024-11-05 16:35:17.660355] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.926 [2024-11-05 16:35:17.816764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.926 [2024-11-05 16:35:17.926889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:05.495 16:35:18 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.762 1+0 records in 00:21:05.762 1+0 records out 00:21:05.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036402 s, 11.3 MB/s 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.762 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:05.763 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:06.027 { 00:21:06.027 "nbd_device": "/dev/nbd0", 00:21:06.027 "bdev_name": "raid5f" 00:21:06.027 } 00:21:06.027 ]' 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:06.027 { 00:21:06.027 "nbd_device": "/dev/nbd0", 00:21:06.027 "bdev_name": "raid5f" 00:21:06.027 } 00:21:06.027 ]' 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.027 16:35:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.286 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.546 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.547 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:06.805 /dev/nbd0 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:06.805 16:35:19 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:06.805 1+0 records in 00:21:06.805 1+0 records out 00:21:06.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514294 s, 8.0 MB/s 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.805 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.806 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:07.064 { 00:21:07.064 "nbd_device": "/dev/nbd0", 00:21:07.064 "bdev_name": "raid5f" 00:21:07.064 } 00:21:07.064 ]' 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:07.064 { 00:21:07.064 "nbd_device": "/dev/nbd0", 00:21:07.064 "bdev_name": "raid5f" 00:21:07.064 } 00:21:07.064 ]' 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:07.064 256+0 records in 00:21:07.064 256+0 records out 00:21:07.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449624 s, 233 MB/s 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:07.064 16:35:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:07.064 256+0 records in 00:21:07.064 256+0 records out 00:21:07.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322633 s, 32.5 MB/s 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.064 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.323 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:07.583 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:07.584 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:07.584 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:07.847 malloc_lvol_verify 00:21:07.847 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:08.125 a6ea2b81-2287-4dc9-be66-e56f1988d8d1 00:21:08.125 16:35:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:08.125 a719dbf9-6995-4dcc-8c04-9f81a5e0f52e 00:21:08.125 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:08.412 /dev/nbd0 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:08.412 mke2fs 1.47.0 (5-Feb-2023) 00:21:08.412 Discarding device blocks: 0/4096 done 00:21:08.412 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:08.412 00:21:08.412 Allocating group tables: 0/1 done 00:21:08.412 Writing inode tables: 0/1 done 00:21:08.412 Creating journal (1024 blocks): done 00:21:08.412 Writing superblocks and filesystem accounting information: 0/1 done 00:21:08.412 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.412 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90531 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90531 ']' 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90531 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90531 00:21:08.671 killing process with pid 90531 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90531' 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90531 00:21:08.671 16:35:21 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90531 00:21:10.049 16:35:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:10.049 00:21:10.049 real 0m5.547s 00:21:10.049 user 0m7.505s 00:21:10.049 sys 0m1.273s 00:21:10.049 16:35:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.049 16:35:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:10.049 ************************************ 00:21:10.049 END TEST bdev_nbd 00:21:10.049 ************************************ 00:21:10.308 16:35:23 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:10.308 16:35:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:21:10.308 16:35:23 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:21:10.308 16:35:23 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:10.308 16:35:23 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.308 16:35:23 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.308 16:35:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:10.308 ************************************ 00:21:10.308 START TEST bdev_fio 00:21:10.308 ************************************ 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:10.308 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.308 16:35:23 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:10.308 ************************************ 00:21:10.308 START TEST bdev_fio_rw_verify 00:21:10.308 ************************************ 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:10.309 16:35:23 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:10.568 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:10.568 fio-3.35 00:21:10.568 Starting 1 thread 00:21:22.786 00:21:22.786 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90732: Tue Nov 5 16:35:34 2024 00:21:22.786 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(445MiB/10000msec) 00:21:22.786 slat (nsec): min=17709, max=98501, avg=20831.65, stdev=2610.01 00:21:22.786 clat (usec): min=10, max=371, avg=139.76, stdev=50.54 00:21:22.786 lat (usec): min=30, max=395, avg=160.59, stdev=51.06 00:21:22.786 clat percentiles (usec): 00:21:22.786 | 50.000th=[ 143], 99.000th=[ 247], 99.900th=[ 277], 99.990th=[ 318], 00:21:22.786 | 99.999th=[ 355] 00:21:22.786 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(462MiB/9884msec); 0 zone resets 00:21:22.786 slat (usec): min=8, max=250, avg=17.89, stdev= 3.97 00:21:22.786 clat (usec): min=59, max=1648, avg=320.12, stdev=48.51 00:21:22.786 lat (usec): min=76, max=1889, avg=338.00, stdev=49.98 00:21:22.786 clat percentiles (usec): 00:21:22.786 | 50.000th=[ 322], 99.000th=[ 437], 99.900th=[ 586], 99.990th=[ 963], 00:21:22.786 | 99.999th=[ 1565] 00:21:22.786 bw ( KiB/s): min=43608, max=49304, per=98.84%, avg=47308.89, stdev=1786.90, samples=19 00:21:22.786 iops : min=10902, max=12326, avg=11827.16, stdev=446.78, samples=19 00:21:22.786 lat (usec) : 20=0.01%, 50=0.01%, 
100=12.79%, 250=39.38%, 500=47.74% 00:21:22.786 lat (usec) : 750=0.07%, 1000=0.02% 00:21:22.786 lat (msec) : 2=0.01% 00:21:22.786 cpu : usr=98.96%, sys=0.46%, ctx=26, majf=0, minf=9457 00:21:22.786 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.786 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.786 issued rwts: total=113860,118271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.786 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:22.786 00:21:22.786 Run status group 0 (all jobs): 00:21:22.786 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=445MiB (466MB), run=10000-10000msec 00:21:22.786 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=462MiB (484MB), run=9884-9884msec 00:21:23.046 ----------------------------------------------------- 00:21:23.046 Suppressions used: 00:21:23.046 count bytes template 00:21:23.046 1 7 /usr/src/fio/parse.c 00:21:23.046 857 82272 /usr/src/fio/iolog.c 00:21:23.046 1 8 libtcmalloc_minimal.so 00:21:23.046 1 904 libcrypto.so 00:21:23.046 ----------------------------------------------------- 00:21:23.046 00:21:23.046 00:21:23.046 real 0m12.768s 00:21:23.046 user 0m12.946s 00:21:23.046 sys 0m0.621s 00:21:23.046 16:35:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:23.046 16:35:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:23.046 ************************************ 00:21:23.046 END TEST bdev_fio_rw_verify 00:21:23.046 ************************************ 00:21:23.046 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:23.046 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dc5cfdf9-349a-45be-aa7b-53c220199114"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dc5cfdf9-349a-45be-aa7b-53c220199114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dc5cfdf9-349a-45be-aa7b-53c220199114",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6b2b577c-b32f-439a-84ab-4ab777138ab8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a96b2bad-f7c2-44a9-8a96-2ce252890e1a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e4aa7e37-8fd6-43eb-8294-b7f786be8827",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:23.306 /home/vagrant/spdk_repo/spdk 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:23.306 00:21:23.306 real 0m13.036s 00:21:23.306 user 0m13.065s 00:21:23.306 sys 0m0.739s 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:23.306 16:35:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:23.306 ************************************ 00:21:23.306 END TEST bdev_fio 00:21:23.306 ************************************ 00:21:23.306 16:35:36 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:23.306 16:35:36 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:23.306 16:35:36 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:23.306 16:35:36 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:23.306 16:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:23.306 ************************************ 00:21:23.306 START TEST bdev_verify 00:21:23.306 ************************************ 00:21:23.306 16:35:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:23.306 [2024-11-05 16:35:36.354542] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 
00:21:23.306 [2024-11-05 16:35:36.354664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90897 ] 00:21:23.566 [2024-11-05 16:35:36.527396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:23.566 [2024-11-05 16:35:36.636181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.566 [2024-11-05 16:35:36.636212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.133 Running I/O for 5 seconds... 00:21:26.078 15130.00 IOPS, 59.10 MiB/s [2024-11-05T16:35:40.544Z] 14232.00 IOPS, 55.59 MiB/s [2024-11-05T16:35:41.481Z] 13634.00 IOPS, 53.26 MiB/s [2024-11-05T16:35:42.415Z] 12766.00 IOPS, 49.87 MiB/s [2024-11-05T16:35:42.415Z] 12274.00 IOPS, 47.95 MiB/s 00:21:29.327 Latency(us) 00:21:29.327 [2024-11-05T16:35:42.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.327 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:29.327 Verification LBA range: start 0x0 length 0x2000 00:21:29.327 raid5f : 5.02 5582.12 21.81 0.00 0.00 34574.69 329.11 35944.64 00:21:29.327 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:29.327 Verification LBA range: start 0x2000 length 0x2000 00:21:29.327 raid5f : 5.02 6668.74 26.05 0.00 0.00 28871.50 176.18 34342.01 00:21:29.327 [2024-11-05T16:35:42.415Z] =================================================================================================================== 00:21:29.327 [2024-11-05T16:35:42.415Z] Total : 12250.86 47.85 0.00 0.00 31471.59 176.18 35944.64 00:21:30.705 00:21:30.705 real 0m7.317s 00:21:30.705 user 0m13.550s 00:21:30.705 sys 0m0.267s 00:21:30.705 16:35:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:30.705 16:35:43 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 ************************************ 00:21:30.705 END TEST bdev_verify 00:21:30.705 ************************************ 00:21:30.705 16:35:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:30.705 16:35:43 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:21:30.705 16:35:43 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:30.705 16:35:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:30.705 ************************************ 00:21:30.705 START TEST bdev_verify_big_io 00:21:30.705 ************************************ 00:21:30.705 16:35:43 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:30.705 [2024-11-05 16:35:43.743087] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:21:30.705 [2024-11-05 16:35:43.743216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90990 ] 00:21:30.963 [2024-11-05 16:35:43.921106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:30.963 [2024-11-05 16:35:44.037477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.963 [2024-11-05 16:35:44.037724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.529 Running I/O for 5 seconds... 
00:21:33.848 633.00 IOPS, 39.56 MiB/s [2024-11-05T16:35:47.870Z] 665.00 IOPS, 41.56 MiB/s [2024-11-05T16:35:48.804Z] 718.33 IOPS, 44.90 MiB/s [2024-11-05T16:35:49.740Z] 729.25 IOPS, 45.58 MiB/s [2024-11-05T16:35:49.998Z] 723.20 IOPS, 45.20 MiB/s 00:21:36.910 Latency(us) 00:21:36.910 [2024-11-05T16:35:49.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.910 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:36.910 Verification LBA range: start 0x0 length 0x200 00:21:36.910 raid5f : 5.30 323.16 20.20 0.00 0.00 9724686.40 243.26 421261.97 00:21:36.910 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:36.910 Verification LBA range: start 0x200 length 0x200 00:21:36.910 raid5f : 5.25 422.84 26.43 0.00 0.00 7519069.96 187.81 326020.14 00:21:36.910 [2024-11-05T16:35:49.998Z] =================================================================================================================== 00:21:36.910 [2024-11-05T16:35:49.998Z] Total : 746.00 46.62 0.00 0.00 8479471.83 187.81 421261.97 00:21:38.287 00:21:38.287 real 0m7.571s 00:21:38.287 user 0m14.042s 00:21:38.287 sys 0m0.270s 00:21:38.287 16:35:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:38.287 16:35:51 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.287 ************************************ 00:21:38.287 END TEST bdev_verify_big_io 00:21:38.287 ************************************ 00:21:38.287 16:35:51 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:38.287 16:35:51 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:38.287 16:35:51 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:38.287 16:35:51 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:38.287 ************************************ 00:21:38.287 START TEST bdev_write_zeroes 00:21:38.287 ************************************ 00:21:38.287 16:35:51 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:38.546 [2024-11-05 16:35:51.390632] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:21:38.546 [2024-11-05 16:35:51.390759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91098 ] 00:21:38.546 [2024-11-05 16:35:51.570685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.804 [2024-11-05 16:35:51.678828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.371 Running I/O for 1 seconds... 
00:21:40.315 28695.00 IOPS, 112.09 MiB/s 00:21:40.315 Latency(us) 00:21:40.315 [2024-11-05T16:35:53.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.315 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:40.315 raid5f : 1.01 28665.80 111.98 0.00 0.00 4451.55 1352.22 6124.32 00:21:40.315 [2024-11-05T16:35:53.403Z] =================================================================================================================== 00:21:40.315 [2024-11-05T16:35:53.403Z] Total : 28665.80 111.98 0.00 0.00 4451.55 1352.22 6124.32 00:21:41.688 00:21:41.688 real 0m3.242s 00:21:41.688 user 0m2.855s 00:21:41.688 sys 0m0.262s 00:21:41.688 16:35:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:41.688 16:35:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 ************************************ 00:21:41.688 END TEST bdev_write_zeroes 00:21:41.688 ************************************ 00:21:41.688 16:35:54 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:41.688 16:35:54 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:41.688 16:35:54 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.688 16:35:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 ************************************ 00:21:41.688 START TEST bdev_json_nonenclosed 00:21:41.688 ************************************ 00:21:41.688 16:35:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:41.688 [2024-11-05 
16:35:54.691225] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:21:41.688 [2024-11-05 16:35:54.691338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91151 ] 00:21:41.947 [2024-11-05 16:35:54.864071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.947 [2024-11-05 16:35:54.973596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.947 [2024-11-05 16:35:54.973699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:41.947 [2024-11-05 16:35:54.973725] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:41.947 [2024-11-05 16:35:54.973735] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:42.205 00:21:42.205 real 0m0.613s 00:21:42.205 user 0m0.390s 00:21:42.205 sys 0m0.119s 00:21:42.205 16:35:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.205 16:35:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:42.205 ************************************ 00:21:42.205 END TEST bdev_json_nonenclosed 00:21:42.205 ************************************ 00:21:42.205 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:42.205 16:35:55 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:21:42.205 16:35:55 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:42.205 16:35:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.205 
************************************ 00:21:42.205 START TEST bdev_json_nonarray 00:21:42.205 ************************************ 00:21:42.205 16:35:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:42.464 [2024-11-05 16:35:55.360733] Starting SPDK v25.01-pre git sha1 f2120392b / DPDK 24.03.0 initialization... 00:21:42.464 [2024-11-05 16:35:55.360850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91171 ] 00:21:42.464 [2024-11-05 16:35:55.534925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.722 [2024-11-05 16:35:55.646908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.722 [2024-11-05 16:35:55.646995] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:42.722 [2024-11-05 16:35:55.647013] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:42.722 [2024-11-05 16:35:55.647033] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:42.981 00:21:42.981 real 0m0.615s 00:21:42.981 user 0m0.391s 00:21:42.981 sys 0m0.119s 00:21:42.981 16:35:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.981 16:35:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:42.981 ************************************ 00:21:42.981 END TEST bdev_json_nonarray 00:21:42.981 ************************************ 00:21:42.981 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:42.981 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:42.981 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:42.982 16:35:55 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:42.982 00:21:42.982 real 0m47.902s 00:21:42.982 user 1m4.922s 00:21:42.982 sys 0m4.709s 00:21:42.982 16:35:55 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:42.982 16:35:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 
************************************ 00:21:42.982 END TEST blockdev_raid5f 00:21:42.982 ************************************ 00:21:42.982 16:35:56 -- spdk/autotest.sh@194 -- # uname -s 00:21:42.982 16:35:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:42.982 16:35:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:42.982 16:35:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:42.982 16:35:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:42.982 16:35:56 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:21:42.982 16:35:56 -- spdk/autotest.sh@256 -- # timing_exit lib 00:21:42.982 16:35:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.982 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 16:35:56 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:43.241 16:35:56 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:43.241 16:35:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:43.241 16:35:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:43.241 16:35:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:43.241 16:35:56 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:21:43.241 16:35:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:43.241 16:35:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.241 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 16:35:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:43.241 16:35:56 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:43.241 16:35:56 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:43.241 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:21:45.776 INFO: APP EXITING 00:21:45.776 INFO: killing all VMs 00:21:45.776 INFO: killing vhost app 00:21:45.776 INFO: EXIT DONE 00:21:45.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:45.777 Waiting for block devices as requested 00:21:45.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:46.036 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:46.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.876 Cleaning 00:21:46.876 Removing: /var/run/dpdk/spdk0/config 00:21:46.876 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:46.876 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:46.876 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:46.876 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:46.876 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:46.876 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:46.876 Removing: /dev/shm/spdk_tgt_trace.pid57039 00:21:46.876 Removing: /var/run/dpdk/spdk0 00:21:46.876 Removing: /var/run/dpdk/spdk_pid56793 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57039 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57274 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57378 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57434 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57573 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57597 
00:21:46.876 Removing: /var/run/dpdk/spdk_pid57812 00:21:46.876 Removing: /var/run/dpdk/spdk_pid57930 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58043 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58176 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58284 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58329 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58365 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58436 00:21:46.876 Removing: /var/run/dpdk/spdk_pid58564 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59013 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59088 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59170 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59186 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59346 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59362 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59516 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59532 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59607 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59626 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59701 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59719 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59923 00:21:46.876 Removing: /var/run/dpdk/spdk_pid59959 00:21:46.876 Removing: /var/run/dpdk/spdk_pid60048 00:21:46.876 Removing: /var/run/dpdk/spdk_pid61426 00:21:46.876 Removing: /var/run/dpdk/spdk_pid61637 00:21:46.876 Removing: /var/run/dpdk/spdk_pid61777 00:21:46.876 Removing: /var/run/dpdk/spdk_pid62427 00:21:46.876 Removing: /var/run/dpdk/spdk_pid62633 00:21:46.876 Removing: /var/run/dpdk/spdk_pid62779 00:21:46.876 Removing: /var/run/dpdk/spdk_pid63433 00:21:46.876 Removing: /var/run/dpdk/spdk_pid63771 00:21:46.876 Removing: /var/run/dpdk/spdk_pid63915 00:21:46.876 Removing: /var/run/dpdk/spdk_pid65309 00:21:46.876 Removing: /var/run/dpdk/spdk_pid65572 00:21:46.876 Removing: /var/run/dpdk/spdk_pid65713 00:21:46.876 Removing: /var/run/dpdk/spdk_pid67115 00:21:46.876 Removing: /var/run/dpdk/spdk_pid67370 00:21:46.876 Removing: /var/run/dpdk/spdk_pid67516 
00:21:46.876 Removing: /var/run/dpdk/spdk_pid68924 00:21:46.876 Removing: /var/run/dpdk/spdk_pid69371 00:21:46.876 Removing: /var/run/dpdk/spdk_pid69511 00:21:46.876 Removing: /var/run/dpdk/spdk_pid71018 00:21:46.876 Removing: /var/run/dpdk/spdk_pid71289 00:21:46.876 Removing: /var/run/dpdk/spdk_pid71437 00:21:46.876 Removing: /var/run/dpdk/spdk_pid72939 00:21:46.876 Removing: /var/run/dpdk/spdk_pid73209 00:21:46.876 Removing: /var/run/dpdk/spdk_pid73359 00:21:46.876 Removing: /var/run/dpdk/spdk_pid74861 00:21:47.146 Removing: /var/run/dpdk/spdk_pid75354 00:21:47.146 Removing: /var/run/dpdk/spdk_pid75505 00:21:47.146 Removing: /var/run/dpdk/spdk_pid75649 00:21:47.146 Removing: /var/run/dpdk/spdk_pid76078 00:21:47.146 Removing: /var/run/dpdk/spdk_pid76819 00:21:47.146 Removing: /var/run/dpdk/spdk_pid77195 00:21:47.146 Removing: /var/run/dpdk/spdk_pid77885 00:21:47.146 Removing: /var/run/dpdk/spdk_pid78330 00:21:47.146 Removing: /var/run/dpdk/spdk_pid79089 00:21:47.146 Removing: /var/run/dpdk/spdk_pid79504 00:21:47.146 Removing: /var/run/dpdk/spdk_pid81473 00:21:47.146 Removing: /var/run/dpdk/spdk_pid81918 00:21:47.146 Removing: /var/run/dpdk/spdk_pid82369 00:21:47.146 Removing: /var/run/dpdk/spdk_pid84471 00:21:47.147 Removing: /var/run/dpdk/spdk_pid84958 00:21:47.147 Removing: /var/run/dpdk/spdk_pid85480 00:21:47.147 Removing: /var/run/dpdk/spdk_pid86538 00:21:47.147 Removing: /var/run/dpdk/spdk_pid86862 00:21:47.147 Removing: /var/run/dpdk/spdk_pid87807 00:21:47.147 Removing: /var/run/dpdk/spdk_pid88138 00:21:47.147 Removing: /var/run/dpdk/spdk_pid89075 00:21:47.147 Removing: /var/run/dpdk/spdk_pid89405 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90077 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90357 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90419 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90466 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90711 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90897 00:21:47.147 Removing: /var/run/dpdk/spdk_pid90990 
00:21:47.147 Removing: /var/run/dpdk/spdk_pid91098 00:21:47.147 Removing: /var/run/dpdk/spdk_pid91151 00:21:47.147 Removing: /var/run/dpdk/spdk_pid91171 00:21:47.147 Clean 00:21:47.147 16:36:00 -- common/autotest_common.sh@1451 -- # return 0 00:21:47.147 16:36:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:47.147 16:36:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.147 16:36:00 -- common/autotest_common.sh@10 -- # set +x 00:21:47.147 16:36:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:47.147 16:36:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.147 16:36:00 -- common/autotest_common.sh@10 -- # set +x 00:21:47.147 16:36:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:47.147 16:36:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:47.147 16:36:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:47.147 16:36:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:47.147 16:36:00 -- spdk/autotest.sh@394 -- # hostname 00:21:47.405 16:36:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:47.405 geninfo: WARNING: invalid characters removed from testname! 
00:22:09.343 16:36:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:11.886 16:36:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:13.796 16:36:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:15.704 16:36:28 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.718 16:36:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.627 16:36:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:21.535 16:36:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:21.535 16:36:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:21.535 16:36:34 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:21.535 16:36:34 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:21.535 16:36:34 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:21.535 16:36:34 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:21.535 + [[ -n 5427 ]] 00:22:21.535 + sudo kill 5427 00:22:21.544 [Pipeline] } 00:22:21.560 [Pipeline] // timeout 00:22:21.565 [Pipeline] } 00:22:21.580 [Pipeline] // stage 00:22:21.585 [Pipeline] } 00:22:21.600 [Pipeline] // catchError 00:22:21.611 [Pipeline] stage 00:22:21.613 [Pipeline] { (Stop VM) 00:22:21.625 [Pipeline] sh 00:22:21.909 + vagrant halt 00:22:24.448 ==> default: Halting domain... 00:22:32.587 [Pipeline] sh 00:22:32.869 + vagrant destroy -f 00:22:35.407 ==> default: Removing domain... 
00:22:35.419 [Pipeline] sh 00:22:35.702 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:35.712 [Pipeline] } 00:22:35.727 [Pipeline] // stage 00:22:35.733 [Pipeline] } 00:22:35.748 [Pipeline] // dir 00:22:35.753 [Pipeline] } 00:22:35.767 [Pipeline] // wrap 00:22:35.773 [Pipeline] } 00:22:35.786 [Pipeline] // catchError 00:22:35.795 [Pipeline] stage 00:22:35.797 [Pipeline] { (Epilogue) 00:22:35.810 [Pipeline] sh 00:22:36.095 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:41.389 [Pipeline] catchError 00:22:41.391 [Pipeline] { 00:22:41.404 [Pipeline] sh 00:22:41.688 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:41.688 Artifacts sizes are good 00:22:41.697 [Pipeline] } 00:22:41.712 [Pipeline] // catchError 00:22:41.723 [Pipeline] archiveArtifacts 00:22:41.731 Archiving artifacts 00:22:41.836 [Pipeline] cleanWs 00:22:41.848 [WS-CLEANUP] Deleting project workspace... 00:22:41.848 [WS-CLEANUP] Deferred wipeout is used... 00:22:41.855 [WS-CLEANUP] done 00:22:41.857 [Pipeline] } 00:22:41.873 [Pipeline] // stage 00:22:41.878 [Pipeline] } 00:22:41.892 [Pipeline] // node 00:22:41.898 [Pipeline] End of Pipeline 00:22:41.937 Finished: SUCCESS